Third-party Slackware packages built with slackrepo on Slackware64-current. Use at your own risk. All packages are signed with my GPG key.
| Name | Last modified | Size |
|---|---|---|
| llama.cpp-vulkan-b9016-x86_64-1_danix.txz.asc | 2026-05-04 18:32 | 870 |
| llama.cpp-vulkan-b9016-x86_64-1_danix.txz.sha256 | 2026-05-04 18:26 | 108 |
| llama.cpp-vulkan-b9016-x86_64-1_danix.txz.md5 | 2026-05-04 18:26 | 76 |
| llama.cpp-vulkan-b9016-x86_64-1_danix.meta | 2026-05-04 18:26 | 760 |
| llama.cpp-vulkan-b9016-x86_64-1_danix.txt | 2026-05-04 18:26 | 515 |
| llama.cpp-vulkan-b9016-x86_64-1_danix.lst | 2026-05-04 18:26 | 8.8K |
| llama.cpp-vulkan-b9016-x86_64-1_danix.txz | 2026-05-04 18:26 | 11M |
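Since the packages are GPG-signed and ship with checksum files, it is worth verifying them before installing. A minimal sketch, using the file names from the listing above and assuming the signer's public key has already been imported into your keyring:

```shell
PKG=llama.cpp-vulkan-b9016-x86_64-1_danix.txz

# Verify the detached GPG signature (assumes the public key is imported)
if [ -f "$PKG.asc" ]; then
    gpg --verify "$PKG.asc" "$PKG"
fi

# Check the SHA-256 checksum against the published .sha256 file
if [ -f "$PKG.sha256" ]; then
    sha256sum -c "$PKG.sha256"
fi

# Install or upgrade with Slackware's own package tools (run as root)
# upgradepkg --install-new "$PKG"
```

`upgradepkg --install-new` installs the package if absent and upgrades it in place if an older build is already installed.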
llama.cpp-vulkan (LLM inference in C/C++)
Port of Meta's LLaMA model in C/C++, built with Vulkan GPU acceleration
The main goal of llama.cpp is to enable LLM inference with minimal
setup and state-of-the-art performance on a wide range of hardware
locally and in the cloud.
Home: https://github.com/ggml-org/llama.cpp
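Once the package is installed, a quick way to exercise the Vulkan backend is to run `llama-cli` with all layers offloaded to the GPU. A minimal sketch; the model path is a placeholder and any GGUF model file would do:

```shell
# Placeholder path: substitute any GGUF model you have downloaded
MODEL="$HOME/models/model.gguf"

if command -v llama-cli >/dev/null 2>&1 && [ -f "$MODEL" ]; then
    # -ngl 99 offloads all model layers to the GPU via the Vulkan backend
    llama-cli -m "$MODEL" -ngl 99 -p "Hello," -n 32
else
    echo "llama-cli or model not found; install the package and fetch a GGUF model"
fi
```

If the Vulkan backend is working, `llama-cli` reports the detected GPU device in its startup log.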