* Only leaf packages are non-namespace packages. This allows most of the top-level packages to be split across different directories or deployment packages. Previously, the presence of __init__.py files at each level meant that the entire tree could only ever exist in one physical directory on the path.
* This changes the API usage slightly: `import mlir` will no longer do a deep import of `mlir.ir`, etc. This may necessitate some client code changes (see the sketch after this list).
* Dialect gen code was restructured so that the user is responsible for providing the `my_dialect.py` file, which then must import its peer `_my_dialect_ops_gen`. This gives complete control of the dialect namespace to the user instead of to tablegen code, allowing further dialect-specific Python APIs to be layered on top.
* Correspondingly, the previous extension modules `_my_dialect.py` are now `_my_dialect_ops_ext.py`.
* Now that the `linalg` namespace is open, moved the `linalg_opdsl` tool into it.
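As a minimal sketch of the import change mentioned above (the `mlir.ir` module is the real bindings module; the usage is illustrative):

  # Before this change, `import mlir` transitively made mlir.ir available.
  # Now the submodule must be imported explicitly.
  import mlir.ir            # or: from mlir import ir

  ctx = mlir.ir.Context()   # works; a plain `import mlir` alone no longer would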
* This may require some corresponding downstream adjustments to npcomp, circt, et al:
* Some shallow imports probably need to be converted to deep imports (i.e. `import mlir` no longer brings in the world).
* Each tablegen-generated dialect now needs an explicit `foo.py` which does a `from ._foo_ops_gen import *` (see the sketch after this list). This is similar to the way that generated code operates in the C++ world.
* If providing dialect op extensions, those need to be moved from `_foo.py` -> `_foo_ops_ext.py`.
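A sketch of the last two points, for a hypothetical dialect named "foo" (the file names follow the convention above; the paths and body are illustrative only):

  # mlir/dialects/foo.py -- user-authored entry point for the "foo" dialect.
  # Re-export the tablegen-generated op classes from the peer module, and add
  # any dialect-specific Python APIs alongside them.
  from ._foo_ops_gen import *

  # Hand-written op extensions, if any, now live in _foo_ops_ext.py
  # (previously _foo.py) next to this file.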
Differential Revision: https://reviews.llvm.org/D98096
* Significant features are still missing before most of these can be used effectively (e.g. custom builders, region-based builders).
* We presently also lack a mechanism for actually registering these dialects, but they can be used with contexts that allow unregistered dialects for further prototyping.
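A minimal prototyping sketch of that last point (only the `mlir.ir` calls are the regular binding APIs; the rest is a placeholder):

  import mlir.ir as ir

  with ir.Context() as ctx:
    # The generated dialects cannot be registered yet, so opt in to
    # unregistered dialects before creating their ops.
    ctx.allow_unregistered_dialects = True
    # ... build IR using the generated op classes here ...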
Differential Revision: https://reviews.llvm.org/D94368
In ODS, attributes of an operation can be provided as a part of the "arguments"
field, together with operands. Such attributes are accepted by the op builder
and have accessors generated.
Implement similar functionality for ODS-generated op-specific Python bindings:
the `__init__` method now accepts arguments together with operands, in the same
order as in the ODS `arguments` field; instance properties are added to the
OpView classes to access the attributes.
This initial implementation accepts and returns instances of the corresponding
attribute class, not the underlying values, since the mapping scheme of the
value types between C++, C and Python is not yet clear. Default-valued
attributes are not supported as that would require Python to be able to parse
C++ literals.
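A usage sketch under those constraints (the `foo` dialect and its `ConstOp` are invented for illustration; the `mlir.ir` calls are the ordinary binding APIs):

  import mlir.ir as ir
  from mlir.dialects import foo  # hypothetical generated + user-provided dialect

  with ir.Context() as ctx, ir.Location.unknown():
    ctx.allow_unregistered_dialects = True
    module = ir.Module.create()
    with ir.InsertionPoint(module.body):
      i32 = ir.IntegerType.get_signless(32)
      # The attribute is passed to __init__ alongside operands, in ODS
      # `arguments` order, as an Attribute instance rather than a raw value.
      op = foo.ConstOp(value=ir.IntegerAttr.get(i32, 42))
    # The generated instance property hands back the Attribute instance.
    print(op.value)  # an IntegerAttr, not a Python int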
Since attributes in ODS are tightly related to the actual C++ type system,
provide a separate Tablegen file with the mapping between ODS storage type for
attributes (typically, the underlying C++ attribute class), and the
corresponding class name. So far, this might look unnecessary since all names
match exactly, but this is not necessarily the case for non-standard,
out-of-tree attributes, which may also be placed in non-default namespaces or
Python modules. This also allows out-of-tree users to generate Python bindings
without having to modify the bindings generator itself. Storage type was
preferred over the Tablegen "def" of the attribute class because ODS
essentially encodes attribute _constraints_ rather than classes, e.g. there may
be many Tablegen "def"s in the ODS that correspond to the same attribute type
with additional constraints.
The presence of the explicit mapping requires a change in the .td file
structure: instead of just calling the bindings generator directly on the main
ODS file of the dialect, it becomes necessary to create a new file that
includes the main ODS file of the dialect and provides the mapping for
attribute types. Arguably, this approach offers better separability of the
Python bindings in the build system as the main dialect no longer needs to know
that it is being processed by the bindings generator.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D91542
Introduce an ODS/Tablegen backend producing Op wrappers for Python bindings
based on the ODS operation definition. Usage:
mlir-tblgen -gen-python-op-bindings -Iinclude <path/to/Ops.td> \
    -bind-dialect=<dialect-name>
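The emitted Python module defines one OpView subclass per op in the ODS file. A hand-simplified sketch of the shape of that output, for a hypothetical "foo.add" op with two operands (the real generated code carries additional builder and helper logic):

  from mlir import ir

  class AddOp(ir.OpView):
    OPERATION_NAME = "foo.add"

    @property
    def lhs(self):
      return self.operation.operands[0]

    @property
    def rhs(self):
      return self.operation.operands[1]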
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D90960
We were discussing on Discord the need for extension-based systems like Python to dynamically link against MLIR (or else you can only have one extension that depends on it). Currently, when I set that up, I piggy-backed off of the flag that enables building libLLVM.so and libMLIR.so and depended on libMLIR.so from the Python extension if shared library building was enabled. However, this is less than ideal.
In the current setup, libMLIR.so exports all symbols from both the C++ API and the C-API. The former is a kitchen sink and the latter is curated. We should be splitting them, and things that are properly factored to depend on the C-API should have the option to *only* depend on the C-API, and we should build that shared library no matter what. Its presence isn't just an optimization: it is a key part of the system.
To do this right, I needed to:
* Introduce visibility macros into mlir-c/Support.h. These should work on both *nix and Windows as-is.
* Create a new libMLIRPublicAPI.so with just the mlir-c object files.
* Compile the C-API with -fvisibility=hidden.
* Make libMLIRPublicAPI.so depend on libMLIR.so when libMLIR.so is being built (otherwise, it links against the static libs and produces a mondo libMLIRPublicAPI.so).
* Disable re-exporting of static library symbols that come in as transitive deps.
This gives us a dynamic linked C-API layer that is minimal and should work as-is on all platforms. Since we don't support libMLIR.so building on Windows yet (and it is not very DLL friendly), this will fall back to a mondo build of libMLIRPublicAPI.so, which has its uses (it is also the most size conscious way to go if you happen to know exactly what you need).
Sizes (release/stripped, Ubuntu 20.04):

Shared library build:
  libMLIRPublicAPI.so: 121 KB
  _mlir.cpython-38-x86_64-linux-gnu.so: 1.4 MB
  mlir-capi-ir-test: 135 KB
  libMLIR.so: 21 MB

Static build:
  libMLIRPublicAPI.so: 5.5 MB (since this is a "static" build, this includes the MLIR implementation as non-exported code)
  _mlir.cpython-38-x86_64-linux-gnu.so: 1.4 MB
  mlir-capi-ir-test: 44 KB
Things like npcomp and circt, which bring their own dialects/transforms/etc., would still need the shared library build and code that links against libMLIR.so (since it is all C++ interop stuff), but hopefully things that only depend on the public C-API can just have the one narrow dep.
I spot-checked everything with nm, and it looks good in terms of what is exported/imported at each layer.
I'm not in a hurry to land this, but if it is controversial, I'll probably split off the Support.h and API visibility macro changes, since we should set that pattern regardless.
Reviewed By: mehdi_amini, benvanik
Differential Revision: https://reviews.llvm.org/D90824
The CMake macro refactoring left a hardcoded value in place instead of using
the function argument.
This wasn't caught locally because it required a clean build to trigger.