llama.cpp/tools
Latest commit: Molly Sophia, 72c6bc3f3d (2025-06-23 19:56:19 +08:00)

llama : better rwkv chat template and add missing `inputs.use_jinja` setting (#14336)

* llama-cli : add missing `inputs.use_jinja` setting
* llama : better legacy chat template for rwkv

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
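For context on what this commit touches: per its title, one half makes llama-cli pass its `use_jinja` flag into the chat-template inputs (presumably so the `--jinja` command-line option actually takes effect), and the other half improves the legacy (non-Jinja) chat template used for RWKV models, which format turns as plain "User:"/"Assistant:" text terminated by a blank line. Below is a minimal, self-contained sketch of an RWKV-World-style legacy formatter; the exact roles handled and the trimming behaviour are assumptions for illustration, not the literal upstream diff.

```cpp
// Minimal sketch of an RWKV-World-style legacy chat template, assuming the
// "User: ... / Assistant: ..." turn format with "\n\n" as the turn terminator.
// Illustration only; not the literal code added by commit 72c6bc3f3d.
#include <cstdio>
#include <string>
#include <vector>

struct chat_msg {
    std::string role;
    std::string content;
};

static std::string trim(const std::string & s) {
    const size_t b = s.find_first_not_of(" \t\n");
    const size_t e = s.find_last_not_of(" \t\n");
    return b == std::string::npos ? "" : s.substr(b, e - b + 1);
}

// Format a conversation; when add_ass is true, end with "Assistant:" so the
// model continues with the assistant's reply.
static std::string rwkv_world_template(const std::vector<chat_msg> & chat, bool add_ass) {
    std::string out;
    for (const auto & msg : chat) {
        if (msg.role == "system") {
            out += "System: " + trim(msg.content) + "\n\n";
        } else if (msg.role == "user") {
            out += "User: " + trim(msg.content) + "\n\n";
        } else {
            out += "Assistant: " + trim(msg.content) + "\n\n";
        }
    }
    if (add_ass) {
        out += "Assistant:";
    }
    return out;
}

int main() {
    const std::vector<chat_msg> chat = {
        { "system",    "You are a helpful assistant." },
        { "user",      "Hello!" },
        { "assistant", "Hi, how can I help?" },
        { "user",      "What is RWKV?" },
    };
    std::printf("%s", rwkv_world_template(chat, /*add_ass=*/true).c_str());
    return 0;
}
```

The blank line ("\n\n") serves as the end-of-turn marker for RWKV World models, which is why every completed turn is terminated with it while the trailing "Assistant:" is left open for generation.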
Name              | Last commit                                                                            | Date
batched-bench     | llama : deprecate llama_kv_self_ API (#14030)                                          | 2025-06-06 14:11:15 +03:00
cvector-generator | llama : deprecate llama_kv_self_ API (#14030)                                          | 2025-06-06 14:11:15 +03:00
export-lora       | llama : move end-user examples to tools directory (#13249)                             | 2025-05-02 20:27:13 +02:00
gguf-split        | llama : move end-user examples to tools directory (#13249)                             | 2025-05-02 20:27:13 +02:00
imatrix           | llama : deprecate llama_kv_self_ API (#14030)                                          | 2025-06-06 14:11:15 +03:00
llama-bench       | llama-bench : add --no-warmup flag (#14224) (#14270)                                   | 2025-06-19 12:24:12 +02:00
main              | llama : better rwkv chat template and add missing `inputs.use_jinja` setting (#14336)  | 2025-06-23 19:56:19 +08:00
mtmd              | mtmd : fix Pixtral OOM with large images by capping image_size to 1024 (#14326)        | 2025-06-22 14:44:57 +02:00
perplexity        | llama : deprecate llama_kv_self_ API (#14030)                                          | 2025-06-06 14:11:15 +03:00
quantize          | quantize : handle user-defined pruning of whole layers (blocks) (#13037)               | 2025-06-22 23:16:26 +02:00
rpc               | rpc : Fix build on OpenBSD (#13541)                                                    | 2025-05-25 15:35:53 +03:00
run               | run : avoid double tokenization (#14327)                                               | 2025-06-23 01:28:06 +08:00
server            | kv-cells : fix tracking of seq_pos (#14339)                                            | 2025-06-23 12:27:35 +03:00
tokenize          | llama : move end-user examples to tools directory (#13249)                             | 2025-05-02 20:27:13 +02:00
tts               | sync : vendor (#13901)                                                                 | 2025-05-30 16:25:45 +03:00
CMakeLists.txt    | mtmd : rename llava directory to mtmd (#13311)                                         | 2025-05-05 16:02:55 +02:00