Name                                              Last modified     Size
llama.cpp-vulkan-b9014-x86_64-1_danix.txz.md5     2026-05-04 13:15    76
llama.cpp-vulkan-b9014-x86_64-1_danix.txz.sha256  2026-05-04 10:19   108
llama.cpp-vulkan-b9014-x86_64-1_danix.txt         2026-05-04 13:15   515
llama.cpp-vulkan-b9014-x86_64-1_danix.meta        2026-05-04 13:15   760
llama.cpp-vulkan-b9014-x86_64-1_danix.txz.asc     2026-05-04 13:15   870
llama.cpp-vulkan-b9014-x86_64-1_danix.lst         2026-05-04 13:15  8.8K
llama.cpp-vulkan-b9014-x86_64-1_danix.txz         2026-05-04 10:19   11M

Package description: llama.cpp-vulkan

llama.cpp-vulkan (LLM inference in C/C++)

Port of Facebook's LLaMA model in C/C++ with Vulkan GPU optimizations.

The main goal of llama.cpp is to enable LLM inference with minimal
setup and state-of-the-art performance on a wide range of hardware,
locally and in the cloud.

Home: https://github.com/ggml-org/llama.cpp

Size (compressed): 11728 K
Size (uncompressed): 84560 K
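Once installed, the package's inference CLI can be driven from the command line. A hedged sketch of a typical invocation, assuming the package ships llama.cpp's `llama-cli` binary; the model path is a placeholder, and `-ngl 99` asks the Vulkan backend to offload all model layers to the GPU:

```shell
#!/bin/sh
# Sketch: run a short generation with GPU offload, if the CLI is present.
# "./model.gguf" is a placeholder path, not a file shipped by the package.
if command -v llama-cli >/dev/null 2>&1; then
    # -m: model file, -p: prompt, -n: tokens to generate,
    # -ngl: number of layers to offload to the GPU (99 = effectively all)
    llama-cli -m ./model.gguf -ngl 99 -p "Hello" -n 32
else
    echo "llama-cli not on PATH; install the package first"
fi
```

The `command -v` guard keeps the sketch safe to run on systems where the package is not yet installed.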