Commit Graph

14 Commits

Author SHA1 Message Date
Mircea Trofin 5ce4c9aa04 [mlgo] Use TFLite for 'development' mode.
TFLite is a lightweight, statically linkable[1] model evaluator. It supports
a subset of what the full TensorFlow library does, which is sufficient for
the types of scenarios we envision having. It is also faster.

We still use saved models as the "source of truth": 'release' mode's AOT
starts from a saved model, and the ML training side operates in terms of
saved models.

Using TFLite solves the following problems compared to using the full TF
C API:

- it provides a compiler-friendly implementation for runtime-loadable (as
  opposed to AOT-embedded) models: it's statically linked and can be built
  via cmake;
- it solves an issue we had when building the compiler with both AOT and
  full TF C API support: due to a packaging issue on the TF side, we needed
  the pip package and the TF C API library to be at the same version. We
  have no such constraint now.

The main liability is that TFLite supports only a subset of what the full
TF framework does. We do not expect that to cause an issue, but should it,
we can always revert to the full framework (after also figuring out a way
to address the problems that motivated the move to TFLite).

Details:

This change switches the development mode to TFLite. Models are still
expected to be placed in a directory, i.e. the parameters to clang don't
change; what changes is the directory content: we still need an
`output_spec.json` file, but instead of the saved_model protobuf and the
`variables` directory, we now have a single file, `model.tflite`.

The change includes a utility showing how to convert a saved model to
TFLite, which the tests use.
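
For illustration, a minimal sketch of that kind of conversion, assuming
the TensorFlow Python package is available (the paths are placeholders;
the in-tree utility remains the authoritative version):

    import tensorflow as tf

    # Load the saved model (placeholder path) and convert it to TFLite.
    converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")
    tflite_bytes = converter.convert()

    # Write the single flatbuffer that now sits next to output_spec.json.
    with open("/path/to/model_dir/model.tflite", "wb") as f:
        f.write(tflite_bytes)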

The full TF implementation can still be built (though not side-by-side
with TFLite). We intend to remove it shortly, after patching downstream
dependencies. The build behavior, however, prioritizes TFLite: trying to
enable both the full TF C API and TFLite will just pick TFLite.

[1] thanks to @petrhosek's changes to TFLite's cmake support and its deps!
2022-08-24 16:07:24 -07:00
Mircea Trofin 24c6c35270 [mlgo] Don't provide default model URLs
Pointed out in Issue #56432: the current reference models may not be
quite friendly to open source projects. Their purpose is only
illustrative; the expectation is that projects train their own. To avoid
unintentionally pulling such a model, we made the URL cmake setting
require an explicit user setting.

Differential Revision: https://reviews.llvm.org/D129342
2022-07-11 07:37:14 -07:00
Matthias Braun 53753531bc TensorFlowCompile: Add object file to list of sources rather than LINK_LIBS
Differential Revision: https://reviews.llvm.org/D126736
2022-06-01 09:04:48 -07:00
Matthias Braun e267df8ce8 Use cmake Python3_EXECUTABLE variable instead of hardcoding 2022-05-26 14:48:45 -07:00
Mircea Trofin f29256a64a [MLGO] Improved support for AOT cross-targeting scenarios
The TensorFlow AOT compiler can cross-target, but it can't run on (for
example) arm64. We earlier added support whereby the AOT-ed header and
object would be built on a separate builder and then passed at build time
to a build host where the AOT compiler can't run but clang can otherwise
be built.

To simplify such scenarios, given that we now support more than one
AOT-able case (regalloc and inliner), we make the AOT scenario hinge on
whether files are generated, case by case (this includes the "passed from
a different builder" scenario). This means we no longer need an 'umbrella'
LLVM_HAVE_TF_AOT; control is case by case instead. A builder can opt out
of an AOT case by passing that case's model path as `none`. Note that the
overrides still take precedence.

This patch controls conditional compilation with case-specific flags,
which can be enabled locally for the components where they are available.
We still keep an overall flag for some tests.

The 'development/training' mode is unchanged, because there the model is
passed from the command line and interpreted.

Differential Revision: https://reviews.llvm.org/D117752
2022-01-20 07:05:39 -08:00
Mircea Trofin edf8e3ea5e [NFC][mlgo]Make the test model generator inlining-specific
When looking at building the generator for regalloc, we realized we'd
need quite a bit of custom logic, and that it'd perhaps be easier to just
have each use case (each kind of mlgo policy) have its own stand-alone
test generator.

This patch just consolidates the old `config.py` and
`generate_mock_model.py` into one file, and does away with
subdirectories under Analysis/models.
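
As a sketch of what such a stand-alone generator produces, the following
saves a mock model returning a constant decision; assume the feature name,
shape, and signature are placeholders, not the real inlining interface:

    import tensorflow as tf

    class MockInlineModel(tf.Module):
        """Mock policy: always returns a constant 'don't inline' decision."""

        @tf.function(input_signature=[
            tf.TensorSpec(shape=[1], dtype=tf.int64,
                          name="callee_basic_block_count"),  # placeholder feature
        ])
        def action(self, callee_basic_block_count):
            return {"inlining_decision": tf.constant([0], dtype=tf.int64)}

    # Save in the saved_model format the test infrastructure consumes.
    tf.saved_model.save(MockInlineModel(), "/tmp/mock_inline_model")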
2021-12-22 13:38:45 -08:00
Mircea Trofin f99a8bcde8 [NFC][mlgo]Rename a variable in TensorFlowCompile.cmake
A remaining var had 'inlining' in its name despite being general-purpose.
2021-12-22 12:55:30 -08:00
Mircea Trofin 329b0181c3 [NFC][mlgo] Rename some TensorFlowCompile internal vars
They referred to 'inlining' despite being generic.
2021-12-20 09:26:01 -08:00
Jacob Hegna 96f15aa5bb Fail gracefully if no inlining model is available to download.
Differential Revision: https://reviews.llvm.org/D104829
2021-07-01 04:04:11 +00:00
Jacob Hegna f86d1f99b3 Remove ML inlining model artifacts.
They are not conducive to being stored in git. Instead, we autogenerate
mock model artifacts for use in tests. Production models can be
specified with the cmake flag LLVM_INLINER_MODEL_PATH.

LLVM_INLINER_MODEL_PATH has two sentinel values:
- `download`, which will download the most recent compatible model.
- `autogenerate`, which will autogenerate a "fake" model for testing the
  model uptake infrastructure.

Differential Revision: https://reviews.llvm.org/D104251
2021-06-21 17:38:09 +00:00
Jacob Hegna 6c848c28c2 Remove redundant environment variable XLA_FLAGS.
If the flag is not set, the script saved_model_aot_compile.py in
TensorFlow will default it to the correct value. However, in TF 2.5, the
way the value is set in the TensorFlowCompile.cmake file triggers a build
error.

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D103972
2021-06-14 23:58:22 +00:00
Mircea Trofin f34ef248d3 [mlgo] Skip AOT-compiling a model if a header/object pair is provided
This allows one to cross-compile the header/object for a model in a
setup where the compiler is built on a system that cannot host the AOT
compiler. For example, the AOT TensorFlow compiler can cross-compile to
arm but can't currently run on arm, so it is unavailable when building an
arm-hostable clang.

The only alternative in that scenario would be to cross-compile clang
itself, but that complicates running tests afterwards.
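
For context, the AOT step that runs on the capable builder looks roughly
like the following sketch; it shells out to TensorFlow's saved_model_cli,
and the paths, signature key, and C++ class name are placeholders:

    import subprocess

    # Cross-target the AOT compilation to arm64 from an x86 builder. The
    # resulting header/object pair is what gets handed to the clang build.
    subprocess.run([
        "saved_model_cli", "aot_compile_cpu",
        "--dir", "/path/to/saved_model",             # placeholder model dir
        "--tag_set", "serve",
        "--signature_def_key", "action",             # placeholder key
        "--output_prefix", "/tmp/InlinerSizeModel",  # emits the .h/.o pair
        "--cpp_class", "llvm::InlinerSizeModel",     # placeholder class
        "--target_triple", "aarch64-linux-gnu",
    ], check=True)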

Differential Revision: https://reviews.llvm.org/D99992
2021-04-13 09:46:29 -07:00
Mircea Trofin 33481c9997 [mlgo] Fetch models from path / URL
Allow a custom location for the pre-trained models used when
AOT-compiling policies.

Differential Revision: https://reviews.llvm.org/D96796
2021-02-16 22:47:14 -08:00
Mircea Trofin bdceefe95b [llvm] Release-mode ML InlineAdvisor
Summary:
This implementation uses a pre-trained model that is statically compiled
into a native function.

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.html

Reviewers: davidxl, jdoerfert, dblaikie

Subscribers: mgorny, eraman, hiraditya, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D81515
2020-06-24 08:18:42 -07:00