clang-p2996/llvm/lib/ExecutionEngine/Orc/TaskDispatch.cpp
Lang Hames 6d72bf4760 [ORC][llvm-jitlink] Add SimpleLazyReexportsSpeculator, use in llvm-jitlink.
Also adds a new IdleTask type and updates DynamicThreadPoolTaskDispatcher to
schedule IdleTasks whenever the total number of threads running is less than
the maximum number of MaterializationThreads.

A SimpleLazyReexportsSpeculator instance maintains a list of speculation
suggestions ((JITDylib, Function) pairs) and registered lazy reexports. When
speculation opportunities are available (having been added via
addSpeculationSuggestions or when lazy reexports were created) it schedules
an IdleTask that triggers the next speculative lookup as soon as resources
are available. Speculation suggestions are processed first, followed by
lookups for lazy reexport bodies. A callback can be registered at object
construction time to record lazy reexport executions as they happen, and these
executions can be fed back into the speculator as suggestions on subsequent
executions.

The llvm-jitlink tool is updated to support speculation when lazy linking is
used via three new arguments:

 -speculate=[none|simple] : When the 'simple' value is specified a
                            SimpleLazyReexportsSpeculator instance is used
                            for speculation.

 -speculate-order <path> : Specifies a path to a CSV containing
                           (jit dylib name, function name) pairs to use
                           as speculative suggestions in the current run.

 -record-lazy-execs <path> : Specifies a path in which to record lazy function
                             executions as a CSV of (jit dylib name, function
                             name) pairs, suitable for use with
                             -speculate-order.

The same path can be passed to -speculate-order and -record-lazy-execs, in
which case the file will be overwritten at the end of the execution.
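As an illustration, a speculation-order file in the format described above might look like the following (the dylib and function names are hypothetical):

```
main,hot_fn
main,helper_fn
libutil,init_tables
```

A file recorded via -record-lazy-execs on one run has this same shape, so it can be fed back through -speculate-order on the next run.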

No testcase yet: Speculative linking is difficult to test (since by definition
execution behavior should be unaffected by speculation) and this is a new
prototype of the concept*. Tests will be added in the future once the interface
and behavior settle down.

* An earlier implementation of the speculation concept can be found in
  llvm/include/llvm/ExecutionEngine/Orc/Speculation.h. Both systems have the
  same goal (hiding compilation latency) but different mechanisms. This patch
  relies entirely on information available in the controller, where the old
  system could receive additional information from the JIT'd runtime via
  callbacks. I aim to combine the two in the future, but want to gain more
  practical experience with speculation first.
2025-01-10 11:48:08 +11:00


//===------------ TaskDispatch.cpp - ORC task dispatch utils --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "llvm/ExecutionEngine/Orc/TaskDispatch.h"
#include "llvm/Config/llvm-config.h" // for LLVM_ENABLE_THREADS
#include "llvm/ExecutionEngine/Orc/Core.h"
namespace llvm {
namespace orc {
char Task::ID = 0;
char GenericNamedTask::ID = 0;
char IdleTask::ID = 0;
const char *GenericNamedTask::DefaultDescription = "Generic Task";
void Task::anchor() {}
void IdleTask::anchor() {}
TaskDispatcher::~TaskDispatcher() = default;
void InPlaceTaskDispatcher::dispatch(std::unique_ptr<Task> T) { T->run(); }
void InPlaceTaskDispatcher::shutdown() {}
#if LLVM_ENABLE_THREADS
void DynamicThreadPoolTaskDispatcher::dispatch(std::unique_ptr<Task> T) {

  enum { Normal, Materialization, Idle } TaskKind;

  if (isa<MaterializationTask>(*T))
    TaskKind = Materialization;
  else if (isa<IdleTask>(*T))
    TaskKind = Idle;
  else
    TaskKind = Normal;

  {
    std::lock_guard<std::mutex> Lock(DispatchMutex);

    // Reject new tasks if they're dispatched after a call to shutdown.
    if (Shutdown)
      return;

    if (TaskKind == Materialization) {

      // If this is a materialization task and there are too many running
      // already then queue this one up and return early.
      if (!canRunMaterializationTaskNow())
        return MaterializationTaskQueue.push_back(std::move(T));

      // Otherwise record that we have a materialization task running.
      ++NumMaterializationThreads;
    } else if (TaskKind == Idle) {
      if (!canRunIdleTaskNow())
        return IdleTaskQueue.push_back(std::move(T));
    }

    ++Outstanding;
  }

  std::thread([this, T = std::move(T), TaskKind]() mutable {
    while (true) {

      // Run the task.
      T->run();

      // Reset the task to free any resources. We need this to happen *before*
      // we notify anyone (via Outstanding) that this thread is done to ensure
      // that we don't proceed with JIT shutdown while still holding resources.
      // (E.g. this was causing "Dangling SymbolStringPtr" assertions).
      T.reset();

      // Check the work queue state and either proceed with the next task or
      // end this thread.
      std::lock_guard<std::mutex> Lock(DispatchMutex);
      if (TaskKind == Materialization)
        --NumMaterializationThreads;
      --Outstanding;

      if (!MaterializationTaskQueue.empty() && canRunMaterializationTaskNow()) {
        // If there are any materialization tasks running then steal that work.
        T = std::move(MaterializationTaskQueue.front());
        MaterializationTaskQueue.pop_front();
        TaskKind = Materialization;
        ++NumMaterializationThreads;
        ++Outstanding;
      } else if (!IdleTaskQueue.empty() && canRunIdleTaskNow()) {
        T = std::move(IdleTaskQueue.front());
        IdleTaskQueue.pop_front();
        TaskKind = Idle;
        ++Outstanding;
      } else {
        if (Outstanding == 0)
          OutstandingCV.notify_all();
        return;
      }
    }
  }).detach();
}

void DynamicThreadPoolTaskDispatcher::shutdown() {
  std::unique_lock<std::mutex> Lock(DispatchMutex);
  Shutdown = true;
  OutstandingCV.wait(Lock, [this]() { return Outstanding == 0; });
}

bool DynamicThreadPoolTaskDispatcher::canRunMaterializationTaskNow() {
  return !MaxMaterializationThreads ||
         (NumMaterializationThreads < *MaxMaterializationThreads);
}

bool DynamicThreadPoolTaskDispatcher::canRunIdleTaskNow() {
  return !MaxMaterializationThreads ||
         (Outstanding < *MaxMaterializationThreads);
}
#endif
} // namespace orc
} // namespace llvm