Previously JITCompileCallbackManager only supported single-threaded code. This
patch embeds a VSO (see include/llvm/ExecutionEngine/Orc/Core.h) in the callback
manager. The VSO ensures that the compile callback is executed only once and that
the resulting address is cached for use by subsequent re-entries.
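The execute-once / cache-the-address behaviour can be pictured with a small
standalone model (the class and typedef below are invented for illustration,
not the actual callback manager API):

#include <cstdint>
#include <functional>
#include <mutex>

// Toy model of the behaviour described above: re-entries race to run the
// compile functor, exactly one wins, and everyone observes the cached address.
using JITTargetAddress = uint64_t;

class CachingCallback {
public:
  CachingCallback(std::function<JITTargetAddress()> Compile)
      : Compile(std::move(Compile)) {}

  // Every re-entry funnels through here; only the first call actually compiles.
  JITTargetAddress operator()() {
    std::call_once(Flag, [this] { CachedAddr = Compile(); });
    return CachedAddr;
  }

private:
  std::function<JITTargetAddress()> Compile;
  std::once_flag Flag;
  JITTargetAddress CachedAddr = 0;
};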
llvm-svn: 333490
Currently RTDyldObjectLinkingLayer makes it hard to support
JITEventListeners, which in turn makes debugging and profiling
JIT-generated code hard.
Supporting JITEventListeners at minimum requires a freed
callback (added).
As listeners expect the ObjectFile to be passed as well, an adaptor
between RTDyldObjectLinkingLayer and JITEventListeners would currently
need to also maintain ObjectFiles for all loaded modules. To make that
less awkward, extend the callbacks to pass the ObjectFile to both
Finalized and Freed callbacks. That requires extending the lifetime
of the object file when callbacks are present.
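One possible shape for such an adaptor, sketched standalone with placeholder
types rather than the real RTDyldObjectLinkingLayer callbacks: the adaptor
shares ownership of each object file and only releases it once the freed
notification has been delivered.

#include <functional>
#include <map>
#include <memory>

struct ObjectFile { /* placeholder for the real object::ObjectFile */ };

class ListenerAdaptor {
public:
  std::function<void(unsigned Key, const ObjectFile &)> OnFinalized;
  std::function<void(unsigned Key, const ObjectFile &)> OnFreed;

  // Called when a module is loaded: take shared ownership so the object file
  // outlives finalization and stays valid until the freed callback has run.
  void notifyLoaded(unsigned Key, std::shared_ptr<ObjectFile> Obj) {
    LiveObjects[Key] = std::move(Obj);
  }

  void notifyFinalized(unsigned Key) {
    if (OnFinalized)
      OnFinalized(Key, *LiveObjects.at(Key));
  }

  void notifyFreed(unsigned Key) {
    if (OnFreed)
      OnFreed(Key, *LiveObjects.at(Key));
    LiveObjects.erase(Key); // lifetime can end once listeners have seen it
  }

private:
  std::map<unsigned, std::shared_ptr<ObjectFile>> LiveObjects;
};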
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44890
llvm-svn: 333227
Re-apply r333147, reverted in r333152 due to a pre-existing bug. As
D47308 has been merged in r333206, the OSX issue should now be
resolved.
In many cases JIT users will know in which module a symbol
resides. Avoiding searching other modules can be more efficient. It
also allows handling duplicate symbol names between modules.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44889
llvm-svn: 333215
The lack of name mangling caused a unittest failure after r333147 (in
TestEagerIRCompilation), as OSX prefixes symbol names with '_'. Without
mangling, lookup returns a NULL pointer, which is then called, hence the
failure.
While it may look like it, this isn't an actual behavioral change, as
findSymbolIn() previously was not exposed externally and was essentially
dead code, which explains why nobody noticed the issue previously.
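The general fix for this class of problem, sketched here assuming a DataLayout
is at hand (this is the usual ORC mangling idiom, not a copy of this patch), is
to run lookup names through llvm::Mangler so the platform prefix, '_' on OSX,
gets applied:

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Mangler.h"
#include "llvm/Support/raw_ostream.h"
#include <string>

// Apply the platform name mangling (e.g. the leading '_' on OSX) before lookup.
static std::string mangle(llvm::StringRef Name, const llvm::DataLayout &DL) {
  std::string MangledName;
  {
    llvm::raw_string_ostream MangledNameStream(MangledName);
    llvm::Mangler::getNameWithPrefix(MangledNameStream, Name, DL);
  }
  return MangledName;
}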
Reviewers: lhames
Reviewed By: lhames
Subscribers: chandlerc, llvm-commits
Differential Revision: https://reviews.llvm.org/D47308
llvm-svn: 333206
This reverts r333147 until https://reviews.llvm.org/D47308 is ready to
be reviewed. r333147 exposed a behavioural difference between
OrcCBindingsStack::findSymbolIn() and OrcCBindingsStack::findSymbol(),
where only the latter does name mangling. After r333147 that causes a
test failure on OSX, because the new test looks for main using
findSymbolIn() but the mangled name is _main.
llvm-svn: 333152
In many cases JIT users will know in which module a symbol
resides. Avoiding searching other modules can be more efficient. It
also allows handling duplicate symbol names between modules.
Reviewed By: lhames
Differential Revision: https://reviews.llvm.org/D44889
llvm-svn: 333147
to a base class (IRMaterializationUnit).
The new class, IRMaterializationUnit, provides a convenient base for any client
that wants to write a materializer for LLVM IR.
llvm-svn: 332993
Also tightens the behavior of ExecutionSession::failQuery. Queries can usually
only be failed by marking a symbol as failed-to-materialize, but
ExecutionSession::failQuery provides a second route, and both routes may be
executed from different threads. In the case that a query has already been
failed due to a materialization error, ExecutionSession::failQuery will
direct the error to ExecutionSession::reportError instead.
llvm-svn: 332898
The lookup function provides blocking symbol resolution for JIT clients (not
layers themselves) so it does not need to track symbol dependencies via a
MaterializationResponsibility.
llvm-svn: 332897
notifyFailed method rather than passing in an error generator.
VSO::notifyFailed is responsible for notifying queries that they will not
succeed due to error. In practice the queries don't care about the details
of the failure, just the fact that a failure occurred for some symbols.
Having VSO::notifyFailed take care of this simplifies the interface.
llvm-svn: 332666
VSOs now track dependencies for materializing symbols. Each symbol must have its
dependencies registered with the VSO prior to finalization. Usually this will
involve registering the dependencies returned in
AsynchronousSymbolQuery::ResolutionResults for queries made while linking the
symbols being materialized.
Queries against symbols are notified that a symbol is ready once it and all of
its transitive dependencies are finalized, allowing compilation work to be
broken up and moved between threads without queries returning until their
symbols are fully safe to access / execute.
Related utilities (VSO, MaterializationUnit, MaterializationResponsibility) are
updated to support dependence tracking and more explicitly track responsibility
for symbols from the point of definition until they are finalized.
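The readiness rule can be modelled in isolation; the names below are invented
and this is not the VSO implementation, just the transitive-dependency check it
performs:

#include <map>
#include <set>
#include <string>
#include <vector>

// Toy dependence tracker: a symbol is ready only when it and everything
// reachable through its registered dependencies has been finalized.
struct DependenceTracker {
  std::map<std::string, std::set<std::string>> Deps;
  std::set<std::string> Finalized;

  bool isReady(const std::string &Sym) const {
    std::set<std::string> Visited;
    std::vector<std::string> Stack{Sym};
    while (!Stack.empty()) {
      std::string S = Stack.back();
      Stack.pop_back();
      if (!Visited.insert(S).second)
        continue;
      if (!Finalized.count(S))
        return false; // some transitive dependency is still materializing
      auto I = Deps.find(S);
      if (I != Deps.end())
        Stack.insert(Stack.end(), I->second.begin(), I->second.end());
    }
    return true;
  }
};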
llvm-svn: 332541
This is a follow-up to r331272.
We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers into our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.
Patch produced by
for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
https://reviews.llvm.org/D46290
llvm-svn: 331275
See r331124 for how I made a list of files missing the include.
I then ran this Python script:
for f in open('filelist.txt'):
    f = f.strip()
    fl = open(f).readlines()
    found = False
    for i in xrange(len(fl)):
        p = '#include "llvm/'
        if not fl[i].startswith(p):
            continue
        if fl[i][len(p):] > 'Config':
            fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
            found = True
            break
    if not found:
        print 'not found', f
    else:
        open(f, 'w').write(''.join(fl))
and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.
No intended behavior change.
llvm-svn: 331184
This forces these operations to be carried out via a
MaterializationResponsibility instance, ensuring responsibility is explicitly
tracked.
llvm-svn: 330356
materializing function definitions.
MaterializationUnit instances are responsible for resolving and finalizing
symbol definitions when their materialize method is called. By contract, the
MaterializationUnit must materialize all definitions it is responsible for and
no others. If it cannot materialize all definitions (because of some error)
then it must notify the associated VSO about each definition that could not be
materialized. The MaterializationResponsibility class tracks this
responsibility, asserting that all required symbols are resolved and finalized,
and that no extraneous symbols are resolved or finalized. In the event of an
error it provides a convenience method for notifying the VSO about each
definition that could not be materialized.
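A stripped-down model of that responsibility contract (purely illustrative; the
real class tracks flags and ownership as well):

#include <cassert>
#include <functional>
#include <set>
#include <string>

// Toy MaterializationResponsibility: tracks the set of symbols this
// materializer must provide and flags anything extraneous or missing.
class ToyResponsibility {
public:
  ToyResponsibility(std::set<std::string> Symbols,
                    std::function<void(const std::string &)> NotifyFailed)
      : Remaining(std::move(Symbols)), NotifyFailed(std::move(NotifyFailed)) {}

  void resolve(const std::string &Name) {
    assert(Remaining.count(Name) &&
           "resolving a symbol we are not responsible for");
    Remaining.erase(Name);
  }

  void finalize() {
    assert(Remaining.empty() && "finalizing with unresolved symbols");
  }

  // Convenience path for error cases: tell the symbol table about every
  // definition that will now never be materialized.
  void failMaterialization() {
    for (const auto &Name : Remaining)
      NotifyFailed(Name);
    Remaining.clear();
  }

private:
  std::set<std::string> Remaining;
  std::function<void(const std::string &)> NotifyFailed;
};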
llvm-svn: 330142
notifyMaterializationFailed.
The notifyMaterializationFailed method can determine which error to raise by
looking at which queue the pending queries are in (resolution or finalization).
llvm-svn: 330141
Previously this crashed because a nullptr (returned by
createLocalIndirectStubsManagerBuilder() on platforms without
indirection support) functor was unconditionally invoked.
Patch by Andres Freund. Thanks Andres!
llvm-svn: 328687
operation all-or-nothing, rather than allowing materialization on a per-symbol
basis.
This addresses a shortcoming of per-symbol materialization: If a
MaterializationUnit (/SymbolSource) wants to materialize more symbols than
requested (which is likely: most materializers will want to materialize whole
modules) then it needs a way to notify the symbol table about the extra symbols
being materialized. This process (checking what has been requested against what
is being provided and notifying the symbol table about the difference) has to
be repeated at every level of the JIT stack. Making materialization
all-or-nothing eliminates this issue, simplifying both materializer
implementations and the symbol table (VSO class) API. The cost is that
per-symbol materialization (e.g. for individual symbols in a module) now
requires multiple MaterializationUnits.
llvm-svn: 327946
This reverts commit r327566, it breaks
test/ExecutionEngine/OrcMCJIT/test-global-ctors.ll.
The test doesn't crash with a stack trace, unfortunately. It merely
returns 1 as the exit code.
ASan didn't produce a report, and I reproduced this on my Linux machine
and Windows box.
llvm-svn: 327576
Layer implementations typically mutate module state, and this is better
reflected by having layers own the Module they are operating on.
llvm-svn: 327566
The Error locals need to be protected by a mutex. (This could be fixed by
having the promises / futures contain Expected and Error values, but
MSVC's future implementation does not support this yet).
Hopefully this will fix some of the errors seen on the builders due to
r327474.
llvm-svn: 327477
The lookup function takes a list of VSOs, a set of symbol names (or just one
symbol name) and a materialization function object. It returns an
Expected<SymbolMap> (if given a set of names) or an Expected<JITEvaluatedSymbol>
(if given just one name). The lookup method constructs an
AsynchronousSymbolQuery for the given names, applies that query to each VSO in
the list in turn, and then blocks waiting for the query to complete. If
threading is enabled then the materialization function object can be used to
execute the materialization on different threads. If threading is disabled the
MaterializeOnCurrentThread utility must be used.
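Conceptually the blocking wrapper is just a promise/future around the
asynchronous query; a minimal standalone sketch, with invented signatures
rather than the real lookup API:

#include <cstdint>
#include <functional>
#include <future>
#include <map>
#include <set>
#include <string>

using SymbolMap = std::map<std::string, uint64_t>;
using OnResolvedFn = std::function<void(SymbolMap)>;
// An "asynchronous query" here is anything that eventually invokes the
// callback, possibly from another thread.
using AsyncLookupFn = std::function<void(std::set<std::string>, OnResolvedFn)>;

// Block the calling thread until the asynchronous query has completed.
SymbolMap blockingLookup(AsyncLookupFn AsyncLookup, std::set<std::string> Names) {
  std::promise<SymbolMap> P;
  std::future<SymbolMap> F = P.get_future();
  AsyncLookup(std::move(Names),
              [&P](SymbolMap Result) { P.set_value(std::move(Result)); });
  return F.get(); // materialization may be running on other threads meanwhile
}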
llvm-svn: 327474
than a shared ObjectFile/MemoryBuffer pair.
There's no need to pre-parse the buffer into an ObjectFile before passing it
down to the linking layer, and moving the parsing into the linking layer allows
us remove the parsing code at each call site.
llvm-svn: 325725
Handles were returned by addModule and used as keys for removeModule,
findSymbolIn, and emitAndFinalize. Their job is now subsumed by VModuleKeys,
which simplify resource management by providing a consistent handle across all
layers.
llvm-svn: 324700
In particular this patch switches RTDyldObjectLinkingLayer to use
orc::SymbolResolver and threads the required changes (ExecutionSession
references and VModuleKeys) through the existing layer APIs.
The purpose of the new resolver interface is to improve query performance and
better support parallelism, both in JIT'd code and within the compiler itself.
The most visible change is the switch of the <Layer>::addModule signatures from:
Expected<Handle> addModule(std::shared_ptr<ModuleType> Mod,
                           std::shared_ptr<JITSymbolResolver> Resolver)
to:
Expected<Handle> addModule(VModuleKey K, std::shared_ptr<ModuleType> Mod);
Typical usage of addModule will now look like:
auto K = ES.allocateVModuleKey();
Resolvers[K] = createSymbolResolver(...);
Layer.addModule(K, std::move(Mod));
See the BuildingAJIT tutorial code for example usage.
llvm-svn: 324405
This resolver conforms to the LegacyJITSymbolResolver interface, and will be
replaced with a null-returning resolver conforming to the newer
orc::SymbolResolver interface in the near future. This patch renames the class
to avoid a clash.
llvm-svn: 324175
first argument.
This makes lookupFlags more consistent with lookup (which takes the query as the
first argument) and composes better in practice, since lookups are usually
linearly chained: Each lookupFlags can populate the result map based on the
symbols not found in the previous lookup. (If the maps were returned rather than
passed by reference there would have to be a merge step at the end).
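The chaining pattern this enables, with the result map passed by reference and
the not-found set shrinking at each step, might look like the following
standalone sketch (all names invented):

#include <map>
#include <set>
#include <string>
#include <vector>

using SymbolNameSet = std::set<std::string>;
using SymbolFlagsMap = std::map<std::string, unsigned>;

// Each symbol table fills in flags for the symbols it knows about and hands
// back the set it did not find.
struct ToyVSO {
  SymbolFlagsMap Known;

  SymbolNameSet lookupFlags(SymbolFlagsMap &Flags, const SymbolNameSet &Names) {
    SymbolNameSet NotFound;
    for (const auto &Name : Names) {
      auto I = Known.find(Name);
      if (I != Known.end())
        Flags[Name] = I->second; // populate the shared result map
      else
        NotFound.insert(Name);   // remainder goes to the next table in the chain
    }
    return NotFound;
  }
};

// Linear chain over several tables: no merge step needed at the end.
SymbolNameSet chainedLookupFlags(const std::vector<ToyVSO *> &VSOs,
                                 SymbolFlagsMap &Flags, SymbolNameSet Names) {
  for (ToyVSO *V : VSOs)
    Names = V->lookupFlags(Flags, Names);
  return Names; // whatever is left was found nowhere
}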
llvm-svn: 323398
orc::SymbolResolver to JITSymbolResolver adapter.
The new orc::SymbolResolver interface uses asynchronous queries for better
performance. (Asynchronous queries with bulk lookup minimize RPC/IPC overhead,
support parallel incoming queries, and expose more available work for
distribution). Existing ORC layers will soon be updated to use the
orc::SymbolResolver API rather than the legacy llvm::JITSymbolResolver API.
Because RuntimeDyld still uses JITSymbolResolver, this patch also includes an
adapter that wraps an orc::SymbolResolver with a JITSymbolResolver API.
llvm-svn: 323073
lookupFlags returns a SymbolFlagsMap for the requested symbols, along with a
set containing the SymbolStringPtr for any symbol not found in the VSO.
The JITSymbolFlags for each symbol will have been stripped of its transient
JIT-state flags (i.e. NotMaterialized, Materializing).
Calling lookupFlags does not trigger symbol materialization.
llvm-svn: 323060
ExternalSymbolMap now stores the string key (rather than using a StringRef),
as the object file backing the key may be removed at any time.
llvm-svn: 323001
Bulk queries reduce IPC/RPC overhead for cross-process JITing and expose
opportunities for parallel compilation.
The two new query methods are lookupFlags, which finds the flags for each of a
set of symbols; and lookup, which finds the address and flags for each of a
set of symbols. (See doxygen comments for more details.)
The existing JITSymbolResolver class is renamed LegacyJITSymbolResolver, and
modified to extend the new JITSymbolResolver class using the following scheme:
- lookupFlags is implemented by calling findSymbolInLogicalDylib for each of the
symbols, then returning the result of calling getFlags() on each of these
symbols. (Importantly: lookupFlags does NOT call getAddress on the returned
symbols, so lookupFlags will never trigger materialization, and lookupFlags will
never call findSymbol, so only symbols that are part of the logical dylib will
return results.)
- lookup is implemented by calling findSymbolInLogicalDylib for each symbol and
falling back to findSymbol if findSymbolInLogicalDylib returns a null result.
Assuming a symbol is found, its getAddress method is called to materialize it and
the result (if getAddress succeeds) is stored in the result map, or the error
(if getAddress fails) is returned immediately from lookup. If any symbol is not
found then lookup returns immediately with an error.
This change will break any out-of-tree derivatives of JITSymbolResolver. This
can be fixed by updating those classes to derive from LegacyJITSymbolResolver
instead.
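The lookup scheme described above can be sketched with placeholder types
(std::optional standing in for JITSymbol, names invented) to show the
logical-dylib-then-fallback flow:

#include <cstdint>
#include <map>
#include <optional>
#include <set>
#include <string>

// Placeholders for the per-symbol legacy interface described above.
struct ToyLegacyResolver {
  std::map<std::string, uint64_t> LogicalDylibSymbols;
  std::map<std::string, uint64_t> ExternalSymbols;

  std::optional<uint64_t> findSymbolInLogicalDylib(const std::string &Name) const {
    auto I = LogicalDylibSymbols.find(Name);
    if (I == LogicalDylibSymbols.end())
      return std::nullopt;
    return I->second;
  }

  std::optional<uint64_t> findSymbol(const std::string &Name) const {
    auto I = ExternalSymbols.find(Name);
    if (I == ExternalSymbols.end())
      return std::nullopt;
    return I->second;
  }
};

// Bulk lookup built on the per-symbol interface: logical dylib first, then the
// external fallback; a symbol found in neither place fails the whole query.
std::optional<std::map<std::string, uint64_t>>
bulkLookup(const ToyLegacyResolver &R, const std::set<std::string> &Names) {
  std::map<std::string, uint64_t> Result;
  for (const auto &Name : Names) {
    auto Sym = R.findSymbolInLogicalDylib(Name);
    if (!Sym)
      Sym = R.findSymbol(Name);
    if (!Sym)
      return std::nullopt; // lookup returns immediately with an error
    Result[Name] = *Sym;
  }
  return Result;
}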
llvm-svn: 322913
ExecutionSession will represent a running JIT program.
VModuleKey is a unique key assigned to each module added as part of
an ExecutionSession. The Layer concept will be updated in future to
require a VModuleKey when a module is added.
llvm-svn: 322336
version being used on some of the green dragon builders (plus a clang-format).
Workaround: AsynchronousSymbolQuery and VSO want to work with
JITEvaluatedSymbols anyway, so just use them (instead of JITSymbol, which
happens to tickle the bug).
The libcxx bug being worked around was fixed in r276003, and there are plans to
update the offending builders.
llvm-svn: 322140
The original commit broke the builders due to a think-o in an assertion:
AsynchronousSymbolQuery's constructor needs to check the callback member
variables, not the constructor arguments.
llvm-svn: 321853
SymbolSource.
These new APIs are a first stab at tackling some current shortcomings of ORC,
especially in performance and threading support.
VSO (Virtual Shared Object) is a symbol table representing the symbol
definitions of a set of modules that behave as if they had been statically
linked together into a shared object or dylib. Symbol definitions, either
pre-defined addresses or lazy definitions, can be added and queries for symbol
addresses made. The table applies the same linkage strength rules that static
linkers do when constructing a dylib or shared object: duplicate definitions
result in errors, strong definitions override weak or common ones. This class
should improve symbol lookup speed by providing centralized symbol tables (as
compared to the findSymbol implementation in the in-tree ORC layers, which
maintain one symbol table per object file / module added).
AsynchronousSymbolQuery is a query for the addresses of a set of symbols.
Query results are returned via a callback once they become available. By
querying for a set of symbols, rather than one symbol at a time (as the current
lookup scheme does), the JIT has the opportunity to make better use of available
resources (e.g. by spawning multiple jobs to materialize the requested symbols
if possible). Returning results via a callback makes queries asynchronous, so
queries from multiple threads of JIT'd code can proceed simultaneously.
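The shape of such a query can be sketched independently of the real class (all
names invented): the query tracks how many symbols it is still waiting on and
fires its callback, possibly from another thread, when the last one arrives.

#include <cstdint>
#include <functional>
#include <map>
#include <mutex>
#include <set>
#include <string>

// Toy asynchronous query: collect results and fire the callback when complete.
class ToyAsyncQuery {
public:
  using OnCompleteFn = std::function<void(std::map<std::string, uint64_t>)>;

  ToyAsyncQuery(std::set<std::string> Names, OnCompleteFn OnComplete)
      : Outstanding(std::move(Names)), OnComplete(std::move(OnComplete)) {}

  // May be called from any materializing thread.
  void notifySymbolResolved(const std::string &Name, uint64_t Addr) {
    OnCompleteFn RunMe;
    std::map<std::string, uint64_t> Final;
    {
      std::lock_guard<std::mutex> Lock(M);
      Results[Name] = Addr;
      Outstanding.erase(Name);
      if (Outstanding.empty()) {
        RunMe = std::move(OnComplete);
        Final = std::move(Results);
      }
    }
    if (RunMe)
      RunMe(std::move(Final)); // run the callback outside the lock
  }

private:
  std::mutex M;
  std::set<std::string> Outstanding;
  std::map<std::string, uint64_t> Results;
  OnCompleteFn OnComplete;
};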
SymbolSource represents a source of symbol definitions. It is used when
adding lazy symbol definitions to a VSO. Symbol definitions can be materialized
when needed or discarded if a stronger definition is found. Materializing on
demand via SymbolSources should (eventually) allow us to remove the lazy
materializers from JITSymbol, which will in turn allow the removal of many
current error checks and reduce the number of RPC round-trips involved in
materializing remote symbols. Adding a discard function allows sources to
discard symbol definitions (or mark them as available_externally), reducing the
amount of redundant code generated by the JIT for ODR symbols.
llvm-svn: 321838
This patch introduces RemoteObjectClientLayer and RemoteObjectServerLayer,
which can be used to forward ORC object-layer operations from a JIT stack in
the client to a JIT stack (consisting only of object-layers) in the server.
This is a new way to support remote-JITing in LLVM. The previous approach
(supported by OrcRemoteTargetClient and OrcRemoteTargetServer) used a
remote-mapping memory manager that sat "beneath" the JIT stack and sent
fully-relocated binary blobs to the server. The main advantage of the new
approach is that relocatable objects can be cached on the server and re-used
(if the code that they represent hasn't changed), whereas fully-relocated blobs
can not (since the addresses they have been permanently bound to will change
from run to run).
llvm-svn: 312511
Calling grow may result in an error if, for example, this is a callback
manager for a remote target. We need to be able to return this error to the
caller.
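With llvm::Error / llvm::Expected the propagation looks like the sketch below;
the function names and the failure condition are invented, only the Error
plumbing is the standard LLVM idiom.

#include "llvm/Support/Error.h"
#include <cstdint>

// Hypothetical trampoline-pool grow operation that can fail, e.g. because the
// remote process refused to allocate more memory.
llvm::Expected<uint64_t> growTrampolinePool(bool RemoteAllocFailed) {
  if (RemoteAllocFailed)
    return llvm::make_error<llvm::StringError>(
        "remote refused to grow the trampoline pool",
        llvm::inconvertibleErrorCode());
  return 0x1000; // address of the newly allocated trampoline block
}

llvm::Expected<uint64_t> getAvailableTrampoline(bool RemoteAllocFailed) {
  // Propagate the failure to the caller rather than asserting or aborting.
  auto BlockAddrOrErr = growTrampolinePool(RemoteAllocFailed);
  if (!BlockAddrOrErr)
    return BlockAddrOrErr.takeError();
  return *BlockAddrOrErr;
}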
llvm-svn: 312429
https://reviews.llvm.org/D36888
From that review description:
When an OrcMCJITReplacement object gets destructed, LazyEmitLayer may still
contain a shared_ptr of a module, which requires ShouldDelete in the deleter.
But ShouldDelete gets destructed before LazyEmitLayer due to the order of
declaration in OrcMCJITReplacement, which leads to a crash, when the destructor
of LazyEmitLayer is executed. Changing the order of declaration fixes this.
Patch by Moritz Kroll. Thanks Moritz!
llvm-svn: 312086
This patch updates the ORC layers and utilities to return and propagate
llvm::Errors where appropriate. This is necessary to allow ORC to safely handle
error cases in cross-process and remote JITing.
llvm-svn: 307350
symbol resolver argument.
De-templatizing the symbol resolver is part of the ongoing simplification of
ORC layer API.
Removing the memory management argument (and delegating construction of memory
managers for RTDyldObjectLinkingLayer to a functor passed in to the constructor)
allows us to build JITs whose base object layers need not be compatible with
RTDyldObjectLinkingLayer's memory management scheme. For example, a 'remote
object layer' that sends fully relocatable objects directly to the remote does
not need a memory management scheme at all (that will be handled by the remote).
llvm-svn: 307058
I think there are some destruction ordering issues here. The
ShouldDelete map seems to be getting destroyed before the shared_ptr
deleter lambda accesses it. In any case, this avoids inserting elements
into the map during shutdown.
llvm-svn: 306736
Revert "[ORC] Remove redundant semicolons from DEFINE_SIMPLE_CONVERSION_FUNCTIONS uses."
Revert "[ORC] Move ORC IR layer interface from addModuleSet to addModule and fix the module type as std::shared_ptr<Module>."
They broke ExecutionEngine/OrcMCJIT/test-global-ctors.ll on linux.
llvm-svn: 306176
move the ObjectCache from the IRCompileLayer to SimpleCompiler.
This is the first in a series of patches aimed at cleaning up and improving the
robustness and performance of the ORC APIs.
llvm-svn: 306058
I did this a long time ago with a janky python script, but now
clang-format has built-in support for this. I fed clang-format every
line with a #include and let it re-sort things according to the precise
LLVM rules for include ordering baked into clang-format these days.
I've reverted a number of files where the results of sorting includes
isn't healthy. Either places where we have legacy code relying on
particular include ordering (where possible, I'll fix these separately)
or where we have particular formatting around #include lines that
I didn't want to disturb in this patch.
This patch is *entirely* mechanical. If you get merge conflicts or
anything, just ignore the changes in this patch and run clang-format
over your #include lines in the files.
Sorry for any noise here, but it is important to keep these things
stable. I was seeing an increasing number of patches with irrelevant
re-ordering of #include lines because clang-format was used. This patch
at least isolates that churn, makes it easy to skip when resolving
conflicts, and gets us to a clean baseline (again).
llvm-svn: 304787
frames.
RuntimeDyld was previously responsible for tracking allocated EH frames, but it
makes more sense to have the RuntimeDyld::MemoryManager track them (since the
frames are allocated through the memory manager, and written to memory owned by
the memory manager). This patch moves the frame tracking into
RTDyldMemoryManager, and changes the deregisterFrames method on
RuntimeDyld::MemoryManager from:
void deregisterEHFrames(uint8_t *Addr, uint64_t LoadAddr, size_t Size);
to:
void deregisterEHFrames();
Separating this responsibility will allow ORC to continue to throw the
RuntimeDyld instances away post-link (saving a few dozen bytes per lazy
function) while properly deregistering frames when modules are unloaded.
This patch also updates ORC to call deregisterEHFrames when modules are
unloaded. This fixes a bug where an exception that tears down the JIT can then
unwind through dangling EH frames that have been deallocated but not
deregistered, resulting in UB.
For people using SectionMemoryManager this should be pretty much a no-op. For
people with custom allocators that override registerEHFrames/deregisterEHFrames,
you will now be responsible for tracking allocated EH frames.
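A standalone sketch of the tracking a custom manager now owns (the hook
signatures follow the description above; wiring this into a real
RuntimeDyld::MemoryManager subclass and the platform-specific unwinder calls
are omitted):

#include <cstddef>
#include <cstdint>
#include <vector>

// Remember every EH frame handed to registerEHFrames so they can all be torn
// down again by the argument-less deregisterEHFrames.
class EHFrameTracker {
public:
  void registerEHFrames(uint8_t *Addr, uint64_t LoadAddr, size_t Size) {
    // ... hand the frames to the unwinder here (platform specific) ...
    Frames.push_back({Addr, LoadAddr, Size});
  }

  void deregisterEHFrames() {
    for (const EHFrame &F : Frames) {
      (void)F;
      // ... tell the unwinder to forget F here (platform specific) ...
    }
    Frames.clear();
  }

private:
  struct EHFrame { uint8_t *Addr; uint64_t LoadAddr; size_t Size; };
  std::vector<EHFrame> Frames;
};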
Reviewed in https://reviews.llvm.org/D32829
llvm-svn: 302589
This patch allows Error and Expected types to be passed to and returned from
RPC functions.
Serializers and deserializers for custom error types (types deriving from the
ErrorInfo class template) can be registered with the SerializationTraits for
a given channel type (see registerStringError in RPCSerialization.h for an
example), allowing a given custom type to be sent/received. Unregistered types
will be serialized/deserialized as StringErrors using the custom type's log
message as the error string.
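The StringError fallback amounts to round-tripping an error through its log
message. A sketch in terms of the generic llvm::Error utilities (this is not
the RPC serialization code itself):

#include "llvm/Support/Error.h"
#include <string>

// Sending side: collapse an arbitrary error into its log message.
std::string serializeAsString(llvm::Error Err) {
  return llvm::toString(std::move(Err)); // consumes the error
}

// Receiving side: rebuild a StringError carrying that message.
llvm::Error deserializeFromString(const std::string &Msg) {
  return llvm::make_error<llvm::StringError>(Msg,
                                             llvm::inconvertibleErrorCode());
}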
llvm-svn: 300167
The current ObjectLinkingLayer (now RTDyldObjectLinkingLayer) links objects
in-process using MCJIT's RuntimeDyld class. In the near future I hope to add new
object linking layers (e.g. a remote linking layer that links objects in the JIT
target process, rather than the client), so I'm renaming this class to be more
descriptive.
llvm-svn: 295636
negotiateFunction where appropriate.
Replacing the old ECError with a custom type allows us to attach the name of
the function that could not be negotiated, enabling better diagnostics for
negotiation failures.
llvm-svn: 292055
multiple asynchronous RPC calls.
ParallelCallGroup allows multiple asynchronous calls to be dispatched,
and provides a wait method that blocks until all asynchronous calls have
been executed on the remote and all return value handlers run on the
local machine.
This will allow, for example, the JIT client to issue memory allocation calls
for all sections in parallel, then block until all memory has been allocated
on the remote and the allocated addresses registered with the client, at which
point the JIT client can proceed to applying relocations.
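A standalone model of the dispatch-then-wait pattern using std::future (this is
not the ParallelCallGroup API, just the shape of its use):

#include <cstdint>
#include <future>
#include <vector>

// Toy stand-in for "dispatch several async allocation calls, then wait".
struct ToyCallGroup {
  std::vector<std::future<uint64_t>> Pending;

  // Dispatch an asynchronous "remote" call; here it is just a deferred task.
  void appendCall(uint64_t SectionSize) {
    Pending.push_back(std::async(std::launch::async, [SectionSize] {
      return uint64_t(0x10000) + SectionSize; // pretend remote base address
    }));
  }

  // Block until every call has completed and its result handler (here, the
  // lambda above) has produced a value.
  std::vector<uint64_t> wait() {
    std::vector<uint64_t> Addrs;
    for (auto &F : Pending)
      Addrs.push_back(F.get());
    Pending.clear();
    return Addrs;
  }
};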
llvm-svn: 290523
(1) Add support for function key negotiation.
The previous version of the RPC required both sides to maintain the same
enumeration for functions in the API. This means that any version skew between
the client and server would result in communication failure.
With this version of the patch functions (and serializable types) are defined
with string names, and the derived function signature strings are used to
negotiate the actual function keys (which are used for efficient call
serialization). This allows clients to connect to any server that supports a
superset of the API (based on the function signatures it supports).
(2) Add a callAsync primitive.
The callAsync primitive can be used to install a return value handler that will
run as soon as the RPC function's return value is sent back from the remote.
(3) Launch policies for RPC function handlers.
The new addHandler method, which installs handlers for RPC functions, takes two
arguments: (1) the handler itself, and (2) an optional "launch policy". When the
RPC function is called, the launch policy (if present) is invoked to actually
launch the handler. This allows the handler to be spawned on a background
thread, or added to a work list. If no launch policy is used, the handler is run
on the server thread itself. This should only be used for short-running
handlers, or entirely synchronous RPC APIs.
(4) Zero cost cross type serialization.
You can now define serialization from any type to a different "wire" type. For
example, this allows you to call an RPC function that's defined to take a
std::string while passing a StringRef argument. If a serializer from StringRef
to std::string has been defined for the channel type this will be used to
serialize the argument without having to construct a std::string instance.
This allows buffer reference types to be used as arguments to RPC calls without
requiring a copy of the buffer to be made.
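The idea can be modelled with an ordinary trait specialization; everything
below is an invented stand-in (std::string_view playing the role of StringRef),
not the actual SerializationTraits machinery:

#include <string>
#include <string_view>

// Invented channel and trait; the real ORC RPC traits differ in detail.
struct ToyChannel {
  std::string Buffer;
};

template <typename WireT, typename ConcreteT> struct ToySerialization;

// Allow a std::string_view argument to be sent where the RPC function is
// declared to take a std::string: write the bytes straight from the view,
// never materializing a std::string on the sending side.
template <> struct ToySerialization<std::string, std::string_view> {
  static void serialize(ToyChannel &C, std::string_view V) {
    C.Buffer.append(V.data(), V.size());
  }
};

template <> struct ToySerialization<std::string, std::string> {
  static void deserialize(ToyChannel &C, std::string &Out) {
    Out = C.Buffer; // the receiving side always sees the wire type
  }
};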
llvm-svn: 286620