llama.cpp/ggml

Latest commit: 55c2646b45 by Aman Gupta
CUDA: add dynamic shared mem to softmax, refactor general usage (#14497)
2025-07-03 07:45:11 +08:00
cmake           ggml-cpu : rework weak alias on apple targets (#14146)                    2025-06-16 13:54:15 +08:00
include         llama : initial Mamba-2 support (#9126)                                   2025-07-02 13:10:24 -04:00
src             CUDA: add dynamic shared mem to softmax, refactor general usage (#14497)  2025-07-03 07:45:11 +08:00
.gitignore      vulkan : cmake integration (#8119)                                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add version function to get lib version (ggml/1286)               2025-07-02 20:08:45 +03:00