ollama/llm/llama.cpp
Latest commit d4cd695759 by Daniel Hiltgen (2023-12-19 09:05:46 -08:00)
Add cgo implementation for llama.cpp

Run server.cpp directly inside the Go runtime via cgo while retaining the
LLM Go abstractions.
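
The commit links llama.cpp's server into the Go process through cgo instead of driving it as a separate program. The sketch below is not ollama's actual binding; it only illustrates the general cgo pattern involved (a native entry point compiled into the same binary and called from Go). The llama_server_start symbol and its stub body are invented here purely so the example compiles standalone.

    package main

    /*
    #include <stdio.h>
    #include <stdlib.h>

    // Stub standing in for a C/C++ entry point exported from server.cpp.
    // Defined inline so this sketch compiles on its own; the real bindings
    // compile the llama.cpp sources into the same binary instead.
    static int llama_server_start(const char *model_path) {
        printf("would start llama.cpp server for model: %s\n", model_path);
        return 0;
    }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    // startServer shows the basic cgo call pattern: convert the Go string to
    // a C string, invoke the embedded native code, and free the C allocation.
    func startServer(modelPath string) error {
        cPath := C.CString(modelPath)
        defer C.free(unsafe.Pointer(cPath))

        if rc := C.llama_server_start(cPath); rc != 0 {
            return fmt.Errorf("llama.cpp server failed to start: rc=%d", rc)
        }
        return nil
    }

    func main() {
        if err := startServer("/path/to/model.gguf"); err != nil {
            fmt.Println(err)
        }
    }

Building this requires CGO_ENABLED=1 and a working C compiler, which is the same constraint the in-process approach places on the real project.
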
Name                 Last commit                             Date
gguf@a7aee47b98      update runner submodule                 2023-12-18 17:33:46 -05:00
patches              Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
gen_common.sh        Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
gen_darwin.sh        Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
gen_linux.sh         Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
gen_windows.ps1      Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
generate_darwin.go   Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
generate_linux.go    Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
generate_windows.go  Add cgo implementation for llama.cpp    2023-12-19 09:05:46 -08:00
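
The per-platform pairing of gen_*.sh / gen_windows.ps1 scripts with generate_*.go files matches the usual go generate layout: each small Go file exists mainly to carry a //go:generate directive that runs the matching build script for its platform. A minimal sketch of that pattern follows; the package name and the exact directive are assumptions, not the real file's contents.

    // generate_linux.go (sketch of the go:generate pattern only; the actual
    // package name and script arguments may differ).

    package llm

    //go:generate bash ./gen_linux.sh

Running go generate ./... from the module root would then invoke gen_linux.sh on a Linux host to build the native llama.cpp pieces before the ordinary go build step.
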