Third-party Slackware packages built with slackrepo on Slackware64-current. Use at your own risk. All packages are signed with my GPG key.
| Name | Last modified | Size |
|---|---|---|
| llama.cpp-vulkan-b9014-x86_64-1_danix.txz.sha256 | 2026-05-04 10:19 | 108 |
| llama.cpp-vulkan-b9014-x86_64-1_danix.txz.md5 | 2026-05-04 13:15 | 76 |
| llama.cpp-vulkan-b9014-x86_64-1_danix.txz.asc | 2026-05-04 13:15 | 870 |
| llama.cpp-vulkan-b9014-x86_64-1_danix.txz | 2026-05-04 10:19 | 11M |
| llama.cpp-vulkan-b9014-x86_64-1_danix.txt | 2026-05-04 13:15 | 515 |
| llama.cpp-vulkan-b9014-x86_64-1_danix.meta | 2026-05-04 13:15 | 760 |
| llama.cpp-vulkan-b9014-x86_64-1_danix.lst | 2026-05-04 13:15 | 8.8K |
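Since the listing publishes a `.sha256` file alongside each package, the checksum can be checked with `sha256sum -c` before installing. A minimal sketch of that step follows; it uses a locally created dummy file in place of the real `.txz` so it can be run anywhere (the actual filenames come from the table above).

```shell
# Sketch of checksum verification, demonstrated on a dummy file.
# For the real package, run `sha256sum -c` on the downloaded .sha256 file.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the downloaded .txz package:
printf 'dummy package payload\n' > pkg.txz

# The repository publishes a matching .sha256 file; recreate one locally:
sha256sum pkg.txz > pkg.txz.sha256

# Verification succeeds only if the file is unmodified:
sha256sum -c pkg.txz.sha256
```

For the real package you would additionally verify the detached GPG signature with `gpg --verify llama.cpp-vulkan-b9014-x86_64-1_danix.txz.asc` (after importing the maintainer's public key) and then install as root with Slackware's `installpkg` or `upgradepkg --install-new`.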
llama.cpp-vulkan (LLM inference in C/C++)
llama.cpp-vulkan:
llama.cpp-vulkan: Inference of Meta's LLaMA model (and other LLMs) in C/C++,
llama.cpp-vulkan: built with Vulkan GPU acceleration.
llama.cpp-vulkan:
llama.cpp-vulkan: The main goal of llama.cpp is to enable LLM inference with
llama.cpp-vulkan: minimal setup and state-of-the-art performance on a wide
llama.cpp-vulkan: range of hardware, locally and in the cloud.
llama.cpp-vulkan:
llama.cpp-vulkan: Home: https://github.com/ggml-org/llama.cpp
llama.cpp-vulkan:
llama.cpp-vulkan: