Name                                              Last modified      Size

Parent Directory                                                     -
llama.cpp-vulkan-b9016-x86_64-1_danix.lst         2026-05-04 18:26   8.8K
llama.cpp-vulkan-b9016-x86_64-1_danix.meta        2026-05-04 18:26   760
llama.cpp-vulkan-b9016-x86_64-1_danix.txt         2026-05-04 18:26   515
llama.cpp-vulkan-b9016-x86_64-1_danix.txz         2026-05-04 18:26   11M
llama.cpp-vulkan-b9016-x86_64-1_danix.txz.asc     2026-05-04 18:32   870
llama.cpp-vulkan-b9016-x86_64-1_danix.txz.md5     2026-05-04 18:26   76
llama.cpp-vulkan-b9016-x86_64-1_danix.txz.sha256  2026-05-04 18:26   108
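The `.md5`, `.sha256`, and `.asc` files in the listing let you verify the package after downloading. A minimal sketch of the checksum workflow, demonstrated on a stand-in file (with the real download you would run the same commands on `llama.cpp-vulkan-b9016-x86_64-1_danix.txz` and its companions):

```shell
# Stand-in for the downloaded package:
printf 'example package contents' > pkg.txz

# A mirror's .sha256 file uses the standard "HASH  FILENAME" format:
sha256sum pkg.txz > pkg.txz.sha256

# Verify the checksum; prints "pkg.txz: OK" on success:
sha256sum -c pkg.txz.sha256

# For the real package, the detached signature is checked with the
# packager's public key (assumed already imported into your keyring):
#   gpg --verify llama.cpp-vulkan-b9016-x86_64-1_danix.txz.asc \
#       llama.cpp-vulkan-b9016-x86_64-1_danix.txz
```

The `.md5` file works the same way with `md5sum -c`, though SHA-256 is the stronger check.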

Package description llama.cpp-vulkan

llama.cpp-vulkan (LLM inference in C/C++)

Port of Facebook's LLaMA model in C/C++ with Vulkan GPU optimizations.

The main goal of llama.cpp is to enable LLM inference with minimal
setup and state-of-the-art performance on a wide range of hardware,
locally and in the cloud.

Home: https://github.com/ggml-org/llama.cpp

Size (compressed): 11720 K
Size (uncompressed): 84537 K