This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead, dialects are only loaded explicitly
on demand:
- the parser lazily loads dialects into the context as it encounters them
during parsing. This is now the only reason to register dialects without
loading them in the context.
- passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads these dialects
into the context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the dialects it requires. For example, in the Toy
tutorial the compiler only needs to load the Toy dialect into the context; all
the others (linalg, affine, std, LLVM, ...) are automatically loaded depending
on the optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register with the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field. A minimal sketch of such an override is shown after this list.
2) For dialects, on construction you can load dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`.
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, any dialect that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, dialects can be loaded in
the context before emitting the IR: `context.getOrLoadDialect<ToyDialect>()`.
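Below is a minimal sketch of point 1), assuming the pass infrastructure of this revision (PassWrapper, DialectRegistry::insert); the pass name is hypothetical and only illustrates declaring a dependent dialect so the PassManager can load it before the pipeline runs:
```
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/Pass/Pass.h"

// Hypothetical pass that creates std operations during rewriting.
struct MyLoweringPass
    : public mlir::PassWrapper<MyLoweringPass, mlir::OperationPass<>> {
  void getDependentDialects(mlir::DialectRegistry &registry) const override {
    // Declare the dialects this pass may create entities from so they are
    // loaded into the context before the pass pipeline starts.
    registry.insert<mlir::StandardOpsDialect>();
  }
  void runOnOperation() override {
    // ... rewrite logic that creates std ops ...
  }
};
```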
Differential Revision: https://reviews.llvm.org/D85622
Due to the original type system implementation, LLVMDialect in MLIR contains an
LLVMContext in which the relevant objects (types, metadata) are created. When
an MLIR module using the LLVM dialect (and related intrinsic-based dialects
NVVM, ROCDL, AVX512) is converted to LLVM IR, it could only live in the
LLVMContext owned by the dialect. The type system no longer relies on the
LLVMContext, so this limitation can be removed. Instead, translation functions
now take a reference to an LLVMContext in which the LLVM IR module should be
constructed. The caller of the translation functions is responsible for
ensuring the same LLVMContext is not used concurrently, as the translation no
longer uses a dialect-wide context lock.
As an additional bonus, this change removes the need to recreate the LLVM IR
module in a different LLVMContext through printing and parsing back, decreasing
the compilation overhead in JIT and GPU-kernel-to-blob passes.
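As a hedged sketch of the new calling convention (exact signatures and header locations have evolved since this revision, so treat this as illustrative rather than the precise API), the caller now creates and owns the LLVMContext and passes it to the translation:
```
#include "mlir/IR/Module.h"
#include "mlir/Target/LLVMIR.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

std::unique_ptr<llvm::Module> lowerToLLVMIR(mlir::ModuleOp module,
                                            llvm::LLVMContext &llvmContext) {
  // The LLVM IR module is created in the caller-provided context; the caller
  // must ensure this context is not used concurrently elsewhere.
  return mlir::translateModuleToLLVMIR(module, llvmContext);
}
```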
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D85443
Summary:
The "i1" (viz. bool) type does not have a proper equivalent on the "C"
side. So, to avoid any ABI issues, we simply use print_i32 on an i32
value of one or zero for true and false. This has the added advantage
that one less function needs to be implemented when porting the runtime
support library.
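A minimal sketch of this convention (print_bool here is purely illustrative; print_i32 is declared as in CRunnerUtils.h):
```
#include <cstdint>

extern "C" void print_i32(int32_t i); // provided by the runtime support library

// Booleans are widened to an i32 of one or zero and reuse print_i32.
static void print_bool(bool b) { print_i32(b ? 1 : 0); }
```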
Reviewers: ftynse, bkramer, nicolasvasilache
Reviewed By: ftynse
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D82048
Summary:
Two integration tests focused on i1 vectors, which exposed omissions
in the LLVM backend that have since been fixed. Note that this also
exposed an inaccuracy for print_i1 which has been fixed in this CL:
for a pure C ABI, int should be used rather than bool.
Reviewers: nicolasvasilache, ftynse, reidtatge, andydavis1, bkramer
Reviewed By: bkramer
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, msifontes
Tags: #mlir
Differential Revision: https://reviews.llvm.org/D81957
Summary:
Add DynamicMemRefType, which can reference either a statically ranked StridedMemRefType or an UnrankedMemRefType, so that runner utils only need to be implemented once.
There is definitely room for more clean up and unification, but I will keep that for follow-ups.
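As a hedged sketch of the unification (member names follow CRunnerUtils.h of this era and may have changed since), a helper written once against DynamicMemRefType works for descriptors built from either a ranked StridedMemRefType or an UnrankedMemRefType:
```
#include <cstdio>
#include "mlir/ExecutionEngine/CRunnerUtils.h"

template <typename T>
void printFirstElement(const DynamicMemRefType<T> &memRef) {
  // Works uniformly for ranked and unranked descriptors: the first element
  // lives at `offset` from the data pointer.
  std::printf("%f\n", static_cast<double>(memRef.data[memRef.offset]));
}
```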
Reviewers: nicolasvasilache
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, stephenneuendorffer, Joonsoo, grosul1, frgossen, Kayjukh, jurahul, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D80513
MLIR tests in "mlir/test/mlir-cpu-runner" fail on SystemZ (z14) because
of an incompatible data layout error. This patch fixes it by setting the host
CPU name in createTargetMachine().
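A minimal sketch of the idea (not the exact patch; the TargetRegistry header moved in later LLVM releases): query the host CPU name and pass it to createTargetMachine() so the selected data layout matches the host:
```
#include <string>
#include "llvm/ADT/Optional.h"
#include "llvm/Support/Host.h"
#include "llvm/Support/TargetRegistry.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Target/TargetOptions.h"

llvm::TargetMachine *createHostTargetMachine(const std::string &triple) {
  std::string error;
  const llvm::Target *target =
      llvm::TargetRegistry::lookupTarget(triple, error);
  if (!target)
    return nullptr;
  // Use the host CPU name instead of the default "generic" CPU so the data
  // layout is compatible with the machine the tests run on (e.g. z14).
  return target->createTargetMachine(triple, llvm::sys::getHostCPUName(),
                                     /*Features=*/"", llvm::TargetOptions(),
                                     /*RM=*/llvm::None);
}
```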
Differential Revision: https://reviews.llvm.org/D80130
Generally:
1) don't use target_link_libraries() and add_mlir_library() on the same target, use LINK_LIBS PUBLIC instead.
2) don't use LINK_LIBS to specify LLVM libraries. Use LINK_COMPONENTS instead
3) no need to link against LLVMSupport. We pull it in by default.
Differential Revision: https://reviews.llvm.org/D80076
The JitRunner library is logically very close to the execution engine,
and shares similar dependencies.
find -name "*.cpp" -exec sed -i "s/Support\/JitRunner/ExecutionEngine\/JitRunner/" "{}" \;
Differential Revision: https://reviews.llvm.org/D79899
Portions of MLIR which depend on LLVMIR generally need to depend on
intrinsics_gen, to ensure that tablegen'd header files from LLVM are built
first. Without this, we get errors, typically about llvm/IR/Attributes.inc
not being found.
Note that previously the Linalg Dialect depended on intrinsics_gen, but it
doesn't need to, since it doesn't use LLVMIR.
Differential Revision: https://reviews.llvm.org/D79389
- Exports MLIR targets to be used out-of-tree.
- mimics `add_clang_library` and `add_flang_library`.
- Fixes libMLIR.so
After https://reviews.llvm.org/D77515, libMLIR.so no longer contained
any object files. We originally had a kludge there that made it work with
the static initializers, and when switching away from that to the way the
clang shlib does it, I noticed that MLIR doesn't create an `obj.{name}` target
and doesn't export its targets to `lib/cmake/mlir`.
This is due to MLIR using `add_llvm_library` under the hood, which adds
the target to `llvmexports`.
Differential Revision: https://reviews.llvm.org/D78773
[MLIR] Fix libMLIR.so and LLVM_LINK_LLVM_DYLIB
Primarily, this patch moves all mlir references to LLVM libraries into
either LLVM_LINK_COMPONENTS or LINK_COMPONENTS. This enables magic in
the llvm cmake files to automatically replace reference to LLVM components
with references to libLLVM.so when necessary. Among other things, this
completes fixing libMLIR.so, which has been broken for some configurations
since D77515.
Unlike previously, the pattern is now that mlir libraries should almost
always use add_mlir_library. Previously, some libraries still used
add_llvm_library. However, this confuses the export of targets for use
out of tree because libraries specified with add_llvm_library are exported
by LLVM. Instead, libraries which don't need to be (or can't be) linked into
libMLIR.so can specify EXCLUDE_FROM_LIBMLIR.
A common error mode is linking with LLVM libraries outside of LINK_COMPONENTS.
This almost always results in symbol confusion or multiply defined options
in LLVM when the same object file is included as a static library and
as part of libLLVM.so. To catch these errors more directly, there's now
mlir_check_all_link_libraries.
To simplify usage of add_mlir_library, we assume that all mlir
libraries depend on LLVMSupport, so it's not necessary to separately specify
it.
tested with:
BUILD_SHARED_LIBS=on,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB + LLVM_LINK_LLVM_DYLIB.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79067
[MLIR] Move from using target_link_libraries to LINK_LIBS
This allows us to correctly generate dependencies for derived targets,
such as targets which are created for object libraries.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79243
Three commits have been squashed to avoid intermediate build breakage.
This change makes the ModuleTranslation threadsafe by locking on the
LLVMContext. Furthermore, we now clone the llvm module into a new
context when compiling to PTX similar to what the OrcJit does.
Differential Revision: https://reviews.llvm.org/D78207
This fixes a number of warnings, where a function is re-defined after it is tagged as "being imported":
D:\llvm-project\mlir\lib\ExecutionEngine\CRunnerUtils.cpp(24,17): warning: 'print_i32' redeclared without 'dllimport' attribute: 'dllexport' attribute added [-Winconsistent-dllimport]
extern "C" void print_i32(int32_t i) { fprintf(stdout, "%" PRId32, i); }
^
D:\llvm-project\mlir\include\mlir/ExecutionEngine/CRunnerUtils.h(168,42): note: previous declaration is here
extern "C" MLIR_CRUNNERUTILS_EXPORT void print_i32(int32_t i);
^
Differential Revision: https://reviews.llvm.org/D76654
Summary:
The C runner utils API was still not vanilla enough for certain use
cases on embedded ARM SDKs; this change enables such cases.
Adding people more widely for historical Windows related build issues.
Differential Revision: https://reviews.llvm.org/D76031
Summary:
This patch adds some built-in operations for the gpu.all_reduce ops:
- for integers only: `and`, `or`, `xor`
- for floats and integers: `min`, `max`
This is useful for higher-level dialects like OpenACC or OpenMP that can lower to the GPU dialect.
Differential Revision: https://reviews.llvm.org/D75766
Summary:
This way, clients can opt-out of the GDB notification listener. Also, this
changes the semantics of enabling the object cache, which seemed the wrong
way around.
Reviewers: rriddle, nicolasvasilache, ftynse, andydavis1
Reviewed By: nicolasvasilache
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, nicolasvasilache, arpith-jacob, mgester, lucyrfox, liufengdb, Joonsoo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75787
Summary:
On Windows, building `mlir_c_runner_utils` doesn't properly export
symbols, thus resulting in an implib not being created, which causes
an error when consuming LLVM from external projects.
Differential Revision: https://reviews.llvm.org/D75769
Putting this up mainly for discussion on
how this should be done. I am interested in MLIR from
the Julia side, and we currently have a strong preference
for dynamically linking against the LLVM shared library,
and would like to have an MLIR shared library.
This patch adds a new cmake function add_mlir_library()
which accumulates a list of targets to be compiled into
libMLIR.so. Note that not all libraries make sense to
be compiled into libMLIR.so. In particular, we want
to avoid libraries which primarily exist to support
certain tools (such as mlir-opt and mlir-cpu-runner).
Note that the resulting libMLIR.so depends on LLVM, but
does not contain any LLVM components. As a result, it
is necessary to link with libLLVM.so to avoid linkage
errors. So, libMLIR.so requires LLVM_BUILD_LLVM_DYLIB=on.
FYI, currently it appears that LLVM_LINK_LLVM_DYLIB is broken
because mlir-tblgen is linked against both libLLVM.so and
independent LLVM components.
A previous version of this patch broke dependencies on TableGen
targets. This appears to be because it compiled all
libraries to OBJECT libraries (probably because cmake
is generating different target names). Avoiding object
libraries results in correct dependencies.
(updated by Stephen Neuendorffer)
Differential Revision: https://reviews.llvm.org/D73130
CMake allows calling target_link_libraries() without a keyword,
but this usage is not preferred when also called with a keyword,
and has surprising behavior. This patch explicitly specifies a
keyword when using target_link_libraries().
Differential Revision: https://reviews.llvm.org/D75725
MLIR ExecutionEngine and derived tools (e.g., mlir-cpu-runner) would trigger an
assertion inside ORC JIT while ExecutionEngine is being destructed after a
failed linking due to a missing function definition. The reason for this is the
JIT lookup that may return an Error referring to strings stored internally by
the JIT. If the Error outlives the ExecutionEngine, it would have a
dangling reference, which is currently caught by an assertion inside JIT thanks
to hand-rolled reference counting. Rewrap the error message into a string
before returning.
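A hedged sketch of the rewrapping (names are illustrative, not the exact ExecutionEngine code): the message is copied out of the JIT-owned Error into a plain string-backed error so nothing dangles once the engine is destroyed:
```
#include <string>
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

llvm::Error rewrapError(llvm::Error original) {
  std::string message;
  llvm::raw_string_ostream os(message);
  // Log every contained error into our own string storage.
  llvm::handleAllErrors(std::move(original),
                        [&](const llvm::ErrorInfoBase &info) { info.log(os); });
  return llvm::make_error<llvm::StringError>(os.str(),
                                             llvm::inconvertibleErrorCode());
}
```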
Differential Revision: https://reviews.llvm.org/D75508
This revision adds a static `mlir_c_runner_utils_static` library
for the sole purpose of being linked into `mlir_runner_utils` on
Windows.
It was previously reported that:
```
`add_llvm_library(mlir_c_runner_utils SHARED CRunnerUtils.cpp)`
produces *only* a dll on windows, the linking of mlir_runner_utils fails
because target_link_libraries is looking for a .lib file as opposed to a
.dll file. I think this may be a case where either we need to use
LINK_LIBS or explicitly build a static lib as well, but I haven't tried
either yet.
```
When compiling libLLVM.so, add_llvm_library() manipulates the link libraries
being used. This means that when using add_llvm_library(), we need to pass
the list of libraries to be linked (using the LINK_LIBS keyword) instead of
using the standard target_link_libraries call. This is preparation for
properly dealing with creating libMLIR.so as well.
Differential Revision: https://reviews.llvm.org/D74864
Summary:
This revision splits out a new CRunnerUtils library that supports
MLIR execution on targets without a C++ runtime.
Differential Revision: https://reviews.llvm.org/D75257
This fixes test failures caused by a change to the name of the main
dylib, now called 'main'. It also hardens the engine against potential
future changes to the name.
Summary:
This patch is a step towards enabling BUILD_SHARED_LIBS=on, which
builds most libraries as DLLs instead of statically linked libraries.
The main effect of this is that incremental build times are greatly
reduced, since usually only one library need be relinked in response
to isolated code changes.
The bulk of this patch is fixing incorrect usage of cmake, where library
dependencies are listed under add_dependencies rather than under
target_link_libraries or under the LINK_LIBS tag. Correct usage should be
like this:
add_dependencies(MLIRfoo MLIRfooIncGen)
target_link_libraries(MLIRfoo MLIRlib1 MLIRlib2)
A separate issue is that in cmake, dependencies between static libraries
are automatically included in dependencies. In the above example, if MLIRlib1
depends on MLIRlib2, then it is sufficient to have only MLIRlib1 in the
target_link_libraries. When compiling with shared libraries, it is necessary
to have both MLIRlib1 and MLIRlib2 specified if MLIRfoo uses symbols from both.
Reviewers: mravishankar, antiagainst, nicolasvasilache, vchuravy, inouehrs, mehdi_amini, jdoerfert
Reviewed By: nicolasvasilache, mehdi_amini
Subscribers: Joonsoo, merge_guards_bot, jholewinski, mgorny, mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, csigg, arpith-jacob, mgester, lucyrfox, herhut, aartbik, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73653
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.
This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.
This doesn't actually modify StringRef yet; I'll do that in a follow-up.
Changes to ORC in ce2207abaf changed the
APIs in IRCompileLayer, now requiring the custom compiler to be wrapped
in IRCompileLayer::IRCompiler. Even though MLIR relies on Orc
CompileUtils, the type is still visible in several places in the code.
Adapt those to the new API.
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers (mlir::) instead of wrapping the definition with the namespace.
PiperOrigin-RevId: 286222654
The ExecutionEngine was updated recently to only take the LLVM dialect as
input. Memrefs are no longer expected in the signature of the entry point
function by the executor so there is no need to allocate and free them. The
code in MemRefUtils is therefore dead and furthermore out of sync with the
recent evolution of memref type to support strides. Drop it.
PiperOrigin-RevId: 276272302
- the JIT codegen was being run at the default -O0 level; instead,
propagate the opt level from the cmd line.
Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Closes tensorflow/mlir#123
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/123 from bondhugula:jit-runner 3b055e47f94c9a48bf487f6400787478738cda02
PiperOrigin-RevId: 267778586
The refactoring of ExecutionEngine dropped the usage of the irTransform function used to pass -O3 and other options to LLVM. As a consequence, the proper optimizations do not kick in in LLVM-land.
This CL makes use of the transform function and allows producing avx512 instructions, on an internal example, when using:
`mlir-cpu-runner -dump-object-file=1 -object-filename=foo.o` combined with `objdump -D foo.o`.
Assembly produced resembles:
```
2b2e: 62 72 7d 48 18 04 0e vbroadcastss (%rsi,%rcx,1),%zmm8
2b35: 62 71 7c 48 28 ce vmovaps %zmm6,%zmm9
2b3b: 62 72 3d 48 a8 c9 vfmadd213ps %zmm1,%zmm8,%zmm9
2b41: 62 f1 7c 48 28 cf vmovaps %zmm7,%zmm1
2b47: 62 f2 3d 48 a8 c8 vfmadd213ps %zmm0,%zmm8,%zmm1
2b4d: 62 f2 7d 48 18 44 0e vbroadcastss 0x4(%rsi,%rcx,1),%zmm0
2b54: 01
2b55: 62 71 7c 48 28 c6 vmovaps %zmm6,%zmm8
2b5b: 62 72 7d 48 a8 c3 vfmadd213ps %zmm3,%zmm0,%zmm8
2b61: 62 f1 7c 48 28 df vmovaps %zmm7,%zmm3
2b67: 62 f2 7d 48 a8 da vfmadd213ps %zmm2,%zmm0,%zmm3
2b6d: 62 f2 7d 48 18 44 0e vbroadcastss 0x8(%rsi,%rcx,1),%zmm0
2b74: 02
2b75: 62 f2 7d 48 a8 f5 vfmadd213ps %zmm5,%zmm0,%zmm6
2b7b: 62 f2 7d 48 a8 fc vfmadd213ps %zmm4,%zmm0,%zmm7
```
etc.
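A hedged sketch of wiring the transform function back in (only the transformer construction is shown, since the exact ExecutionEngine::create signature has changed over time):
```
#include <functional>
#include "mlir/ExecutionEngine/OptUtils.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"

std::function<llvm::Error(llvm::Module *)> makeO3Transformer() {
  // Passing a TargetMachine enables target-aware optimization such as the
  // AVX-512 code shown above; nullptr falls back to target-agnostic settings.
  return mlir::makeOptimizingTransformer(/*optLevel=*/3, /*sizeLevel=*/0,
                                         /*targetMachine=*/nullptr);
}
```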
Fixes tensorflow/mlir#120
PiperOrigin-RevId: 267281097
This commit introduces the bits to be able to dump JIT-compiled
objects to external files by passing an object cache to OrcJit.
The new functionality is tested in mlir-cpu-runner under the flag
`dump-object-file`.
Closes tensorflow/mlir#95
PiperOrigin-RevId: 266439265
This CL makes use of the standard LLVM LLJIT and removes the need for a custom JIT implementation within MLIR.
To achieve this, one needs to clone (i.e. serialize and deserialize) the produced llvm::Module into a new LLVMContext. This is currently necessary because the llvm::LLVMContext is owned by the LLVMDialect, somewhat deep in the call hierarchy.
In the future we should remove the reliance on round-tripping the llvm::Module by allowing the injection of an LLVMContext from the top level. Unfortunately this will require deeper API changes and impact multiple places. It is therefore left for future work.
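A hedged sketch of the cloning step described above (round-tripping the module through bitcode into a caller-owned context; the helper name and error handling are illustrative):
```
#include "llvm/ADT/SmallVector.h"
#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

std::unique_ptr<llvm::Module> cloneIntoNewContext(llvm::Module &m,
                                                  llvm::LLVMContext &ctx) {
  // Serialize the module to bitcode in memory...
  llvm::SmallVector<char, 0> buffer;
  {
    llvm::raw_svector_ostream os(buffer);
    llvm::WriteBitcodeToFile(m, os);
  }
  // ...and parse it back in the target context.
  llvm::MemoryBufferRef bufferRef(
      llvm::StringRef(buffer.data(), buffer.size()), "cloned-module");
  auto cloned = llvm::parseBitcodeFile(bufferRef, ctx);
  if (!cloned) {
    llvm::consumeError(cloned.takeError());
    return nullptr;
  }
  return std::move(*cloned);
}
```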
PiperOrigin-RevId: 264737459
Switch to C++14 standard method as llvm::make_unique has been removed (
https://reviews.llvm.org/D66259). Also mark some targets as C++14 to ease the next
integrates.
PiperOrigin-RevId: 263953918
LLVM r368707 updated the APIs in llvm::orc::DynamicLibrarySearchGenerator to
use unique_ptr for holding the instance of the generator. Update our uses of
DynamicLibrarySearchGenerator in the ExecutionEngine to reflect that.
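As a hedged sketch of the updated usage (the surrounding session/dylib setup is illustrative), GetForCurrentProcess now returns an Expected unique_ptr that is moved into the JITDylib:
```
#include "llvm/ExecutionEngine/Orc/Core.h"
#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
#include "llvm/IR/DataLayout.h"

llvm::Error addProcessSymbols(llvm::orc::JITDylib &dylib,
                              const llvm::DataLayout &dataLayout) {
  auto generator =
      llvm::orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(
          dataLayout.getGlobalPrefix());
  if (!generator)
    return generator.takeError();
  // addGenerator now takes ownership of the generator via unique_ptr.
  dylib.addGenerator(std::move(*generator));
  return llvm::Error::success();
}
```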
PiperOrigin-RevId: 263539855
Many LLVM transformations benefit from knowing the target. This enables optimizations,
especially in a JIT context when the target is (generally) well-known.
Closes tensorflow/mlir#49
PiperOrigin-RevId: 261840617
LLVM r367686 changed the locking scheme to avoid potential deadlocks and the
related llvm::orc::ThreadSafeModule APIs ExecutionEngine was relying upon,
breaking the MLIR build. Update our use of ThreadSafeModule to unbreak the
build.
PiperOrigin-RevId: 261566571
As with Functions, Module will soon become an operation, and operations are value-typed. This eases the transition from Module to ModuleOp. A new class, OwningModuleRef, is provided to allow owning a reference to a Module, and it will auto-delete the held module on destruction.
PiperOrigin-RevId: 256196193
Move the data members out of Function and into a new impl storage class 'FunctionStorage'. This allows Function to become value-typed, which will greatly simplify the transition of Function to FuncOp (given that FuncOp is also value-typed).
PiperOrigin-RevId: 255983022
Originally, ExecutionEngine was created before MLIR had a proper pass
management infrastructure or an LLVM IR dialect (using the LLVM target
directly). It has been running a bunch of lowering passes to convert the input
IR from Standard+Affine dialects to LLVM IR and, later, to the LLVM IR dialect.
This is no longer necessary and is even undesirable for compilation flows that
perform their own conversion to the LLVM IR dialect. Drop this integration and
make ExecutionEngine accept only the LLVM IR dialect. Users of the
ExecutionEngine can call the relevant passes themselves.
--
PiperOrigin-RevId: 249004676
This CL performs post-commit cleanups.
It adds the ability to specify which shared libraries to load dynamically in ExecutionEngine. The linalg integration test is updated to use a shared library.
Additional minor cleanups related to LLVM lowering of Linalg are also included.
--
PiperOrigin-RevId: 248346589
This CL extends the execution engine to allow the additional resolution of symbol names
that have been registered explicitly. This allows linking static library symbols that have not been explicitly exported with the -rdynamic linking flag (which is deemed too intrusive).
--
PiperOrigin-RevId: 247969504
LLVM Orc JIT changed the API for DynamicLibrarySearchGenerator::
GetForCurrentProcess to take only the one value from the DataLayout that it actually
uses instead of the whole data layout. Update the MLIR ExecutionEngine call to
this function accordingly.
--
PiperOrigin-RevId: 244820235
This allows clients to reuse the same logic to set up a module
for the ExecutionEngine without instantiating one. One use case is running
the optimization pipeline but not JIT-ing.
--
PiperOrigin-RevId: 242614380
The existing implementation of the ExecutionEngine unconditionally runs a list
of "default" MLIR passes on the module upon creation. These passes include,
among others, dialect conversions from affine to standard and from standard to
LLVM IR dialects. In some cases, these conversions might have been performed
before ExecutionEngine is created. More advanced use cases may be performing
additional transformations that the "default" passes will conflict with.
Provide an overload for ExecutionEngine::create that takes a PassManager
configured with the passes to run on the module. If it is not provided, do not
run any passes. The engine will not be created if the input module, after the
pass manager, has any other dialect than the LLVM IR dialect.
--
PiperOrigin-RevId: 242127393
There are two places containing constant folding logic right now: the ConstantFold
pass and the GreedyPatternRewriteDriver. The logic was not shared and started to
drift apart. We were testing constant folding logic using the ConstantFold pass,
but lagged behind the GreedyPatternRewriteDriver, where we really want the constant
folding to happen.
This CL pulls the logic into utility functions and classes for sharing between
these two places. A new ConstantFoldHelper class is created to help with
constant folding and de-duplication.
Also, the ConstantFold pass is renamed to TestConstantFold to make it clear that
it is intended for testing purposes.
--
PiperOrigin-RevId: 241971681
These fail with:
could not convert ‘module’ from ‘llvm::orc::ThreadSafeModule’ to
‘llvm::Expected<llvm::orc::ThreadSafeModule>’
PiperOrigin-RevId: 240892583
The original implementation of OptUtils provided two different LLVM IR module
transformers to be used with the MLIR ExecutionEngine: OptimizingTransformer
parameterized by the optimization levels (similar to -O3 flags) and
LLVMPassesTransformer parameterized by the string formatted similarly to
command line options of LLVM's "opt" tool without support for -O* flags.
Introduce such support by declaring the flags inside the parser and by
populating the pass managers similarly to what "opt" does. Remove the
additional flags from mlir-cpu-runner as they can now be wrapped into
`-llvm-opts` together with other LLVM-related flags.
PiperOrigin-RevId: 236107292
A recent change introduced a possibility to run LLVM IR transformation during
JIT-compilation in the ExecutionEngine. Provide helper functions that
construct IR transformers given either clang-style optimization levels or a
list of passes to run. The latter wraps the LLVM command line option parser to
parse strings rather than actual command line arguments. As a result, we can
run either of
mlir-cpu-runner -O3 input.mlir
mlir-cpu-runner -some-mlir-pass -llvm-opts="-llvm-pass -other-llvm-pass"
to combine different transformations. The transformer builder functions are
provided as a separate library that depends on LLVM pass libraries unlike the
main execution engine library. The library can be used for integrating MLIR
execution engine into external frameworks.
PiperOrigin-RevId: 234173493
Original implementation of the translation from MLIR to LLVM IR operated on the
Standard+BuiltIn dialect, with a later addition of the SuperVector dialect.
This required the translation to be aware of a potentially large number of other
dialects as the infrastructure extended. With the recent introduction of the
LLVM IR dialect into MLIR, the translation can be switched to only translate
the LLVM IR dialect, and the translation of the operations becomes largely
mechanical.
The reimplementation of the translator follows the lines of the original
translator in function and basic block conversion. In particular, block
arguments are converted to LLVM IR PHI nodes, which are connected to their
sources after all blocks of a function had been converted. Thanks to LLVM IR
types being wrapped in the MLIR LLVM dialect type, type conversion is
simplified to only convert function types, all other types are simply
unwrapped. Individual instructions are constructed using the LLVM IRBuilder,
which has a great potential for being table-generated from the LLVM IR dialect
operation definitions.
The input of the test/Target/llvmir.mlir is updated to use the MLIR LLVM IR
dialect. While it is now redundant with the dialect conversion test, the point
of the exercise is to guarantee exactly the same LLVM IR is emitted. (Only the
name of the allocation function is changed from `__mlir_alloc` to `alloc` in
the CHECK lines.) It will be simplified in a follow-up commit.
PiperOrigin-RevId: 233842306
Make sure the module is always passed to the optimization layer.
Drop unused default argument for the IR transformation and remove the function
that was only used in this default argument. The transformation wrapper
constructor already checks for the null function, so the caller can just pass
`{}` if they don't want any transformation (no callers currently need this).
PiperOrigin-RevId: 233068817
The current ExecutionEngine flow generates the LLVM IR from MLIR and
JIT-compiles it as is without any transformation. It thus misses the
opportunity to perform optimizations supported by LLVM or collect statistics
about the module. Modify the Orc JITter to perform transformations on the LLVM
IR. Accept an optional LLVM module transformation function when constructing
the ExecutionEngine and use it while JIT-compiling. This prevents MLIR
ExecutionEngine from depending on LLVM passes; its clients should depend on the
passes they require.
PiperOrigin-RevId: 232877060
This CL adds support for calling EDSCs from other languages than C++.
Following the LLVM convention this CL:
1. declares simple opaque types and a C API in mlir-c/Core.h;
2. defines the implementation directly in lib/EDSC/Types.cpp and
lib/EDSC/MLIREmitter.cpp.
Unlike LLVM however the nomenclature for these types and API functions is not
well-defined; naming suggestions are most welcome.
To avoid the need for conversion functions, Types.h and MLIREmitter.h include
mlir-c/Core.h and provide constructors and conversion operators between the
mlir::edsc type and the corresponding C type.
In this first commit, mlir-c/Core.h only contains the types for the C API
to allow EDSCs to work from Python. This includes both a minimal set of core
MLIR
types (mlir_context_t, mlir_type_t, mlir_func_t) as well as the EDSC types
(edsc_mlir_emitter_t, edsc_expr_t, edsc_stmt_t, edsc_indexed_t). This can be
restructured in the future as concrete needs arise.
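As a hedged illustration of the opaque-handle style described above (the actual mlir-c/Core.h of this revision defines its own representations, so these typedefs are only a sketch):
```
typedef void *mlir_context_t; // opaque handle to an MLIRContext
typedef void *mlir_type_t;    // opaque handle to a Type
typedef void *mlir_func_t;    // opaque handle to a Function
```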
For now, the API only supports:
1. scalar types;
2. memrefs of scalar types with static or symbolic shapes;
3. functions with input and output of these types.
The C API is not complete wrt ownership semantics. This is in large part due
to the fact that python bindings are written with Pybind11 which allows very
idiomatic C++ bindings. An effort is made to write a large chunk of these
bindings using the C API but some C++isms are used where the design benefits
from this simplification. A fully isolated C API will make more sense once we
also integrate with another language like Swift and have enough use cases to
drive the design.
Lastly, this CL also fixes a bug in mlir::ExecutionEngine where the order of
declaration of llvmContext and the JIT resulted in an improper order of
destructors (which used to crash before the fix).
PiperOrigin-RevId: 231290250
This implements a simple CPU runner based on LLVM Orc JIT. The base
functionality is provided by the ExecutionEngine class that compiles and links
the module, and provides an interface for obtaining function pointers to the
JIT-compiled MLIR functions and for invoking those functions directly. Since
function pointers need to be cast to the correct pointer type, the
ExecutionEngine wraps LLVM IR functions obtained from MLIR into a helper
function with the common signature `void (void **)` where the single argument
is interpreted as a list of pointers to the actual arguments passed to the
function, eventually followed by a pointer to the result of the function.
Additionally, the ExecutionEngine is set up to resolve library functions to
those available in the current process, enabling support for, e.g., simple C
library calls.
For integration purposes, this also provides a simplistic runtime for memref
descriptors as expected by the LLVM IR code produced by MLIR translation. In
particular, memrefs are transformed into LLVM structs (can be mapped to C
structs) with a pointer to the data, followed by dynamic sizes. This
implementation only supports statically-shaped memrefs of type float, but can
be extended if necessary.
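As a hedged sketch of the simplistic descriptor described above (field names are illustrative; the actual layout is defined by the MLIR-to-LLVM translation), a statically-shaped 2-D float memref maps to a C struct like:
```
#include <cstdint>

struct FloatMemRef2D {
  float *data;      // pointer to the underlying buffer
  int64_t sizes[2]; // dynamic sizes following the data pointer
};
```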
Provide a binary for the runner and a test that exercises it.
PiperOrigin-RevId: 230876363