llama.cpp/requirements

Latest commit: 494c5899cb by Johannes Gäßler, 2025-07-14 13:14:30 +02:00
scripts: benchmark for HTTP server throughput (#14668)
* fix server connection reset

File | Last commit | Date
requirements-all.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-compare-llama-bench.txt | compare-llama-bench: add option to plot (#14169) | 2025-06-14 10:34:20 +02:00
requirements-convert_hf_to_gguf.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-convert_hf_to_gguf_update.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-convert_legacy_llama.txt | py : update transfomers version (#9694) | 2024-09-30 18:03:47 +03:00
requirements-convert_llama_ggml_to_gguf.txt | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00
requirements-convert_lora_to_gguf.txt | common: Include torch package for s390x (#13699) | 2025-05-22 21:31:29 +03:00
requirements-gguf_editor_gui.txt | gguf-py : add support for sub_type (in arrays) in GGUFWriter add_key_value method (#13561) | 2025-05-29 15:36:05 +02:00
requirements-pydantic.txt | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00
requirements-server-bench.txt | scripts: benchmark for HTTP server throughput (#14668) | 2025-07-14 13:14:30 +02:00
requirements-test-tokenizer-random.txt | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00
requirements-tool_bench.txt | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00