llama.cpp/tools

Latest commit: d1aa0cc5d1 by Ed Addario

imatrix: add option to display importance score statistics for a given imatrix file (#12718)
* Add --show-statistics option

* Add --show-statistics logic

* Add tensor name parsing

* Tidy output format

* Fix typo in title

* Improve tensor influence ranking

* Add better statistics

* Change statistics' sort order

* Add Cosine Similarity

* Add header search path

* Change header search path to private

* Add weighted statistics per layer

* Update report title

* Refactor compute_statistics out of main

* Refactor compute_cossim out of load_imatrix

* Refactor compute_statistics out of load_imatrix

* Move imatrix statistics calculation into its own functions

* Add checks and validations

* Remove unnecessary include directory

* Rename labels

* Add m_stats getter and refactor compute_statistics out of load_imatrix

* Refactor variable names

* Minor cosmetic change

* Retrigger checks (empty commit)

* Rerun checks (empty commit)

* Fix unnecessary type promotion

Co-authored-by: compilade <git@compilade.net>

* Reverting change to improve code readability

* Rerun checks (empty commit)

* Rerun checks (empty commit)

* Rerun checks - third time's the Charm 🤞 (empty commit)

* Minor cosmetic change

* Update README

* Fix typo

* Update README

* Rerun checks (empty commit)

* Re-implement changes on top of #9400

* Update README.md

* Update README

* Update README.md

Co-authored-by: compilade <git@compilade.net>

* Update README.md

Co-authored-by: compilade <git@compilade.net>

* Update README.md

* Remove duplicate option in print_usage()

* Update README.md

* Update README.md

Co-authored-by: compilade <git@compilade.net>

* Update README.md

Co-authored-by: compilade <git@compilade.net>

* Remove input check

* Remove commented out code

---------

Co-authored-by: compilade <git@compilade.net>

Committed: 2025-07-22 14:33:37 +02:00
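The commit above adds a mode that reads an existing imatrix file and reports importance-score statistics rather than computing a new matrix. A minimal usage sketch is shown below; the `llama-imatrix` binary name, the `--in-file` flag, and the file name are assumptions based on how the other llama.cpp tools are invoked, and only `--show-statistics` is named in the commit itself:

```sh
# Hypothetical invocation (flags other than --show-statistics are assumed):
# load an existing importance matrix and print its per-tensor/per-layer
# statistics instead of generating a new one.
./llama-imatrix --in-file imatrix.gguf --show-statistics
```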
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched-bench | llama : add high-throughput mode (#14363) | 2025-07-16 16:35:42 +03:00 |
| cvector-generator | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| export-lora | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| gguf-split | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| imatrix | imatrix: add option to display importance score statistics for a given imatrix file (#12718) | 2025-07-22 14:33:37 +02:00 |
| llama-bench | llama-bench : add --no-warmup flag (#14224) (#14270) | 2025-06-19 12:24:12 +02:00 |
| main | llama : fix `--reverse-prompt` crashing issue (#14794) | 2025-07-21 17:38:36 +08:00 |
| mtmd | Mtmd: add a way to select device for vision encoder (#14236) | 2025-07-22 12:51:03 +02:00 |
| perplexity | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| quantize | imatrix : use GGUF to store importance matrices (#9400) | 2025-07-19 12:51:22 -04:00 |
| rpc | rpc : Fix build on OpenBSD (#13541) | 2025-05-25 15:35:53 +03:00 |
| run | cmake : do not search for curl libraries by ourselves (#14613) | 2025-07-10 15:29:05 +03:00 |
| server | server : allow setting `--reverse-prompt` arg (#14799) | 2025-07-22 09:24:22 +08:00 |
| tokenize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| tts | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |