Currently, if RPC servers are specified with `--rpc` and a local GPU backend (e.g. CUDA) is available, the benchmark runs only on the RPC device(s), yet the backend result column reports "CUDA,RPC", which is incorrect. This patch adds all local GPU devices as well, making llama-bench consistent with llama-cli.