Bruce MacDonald
* remove C code
* pack llama.cpp
* use request context for llama_cpp (see the cancellation sketch after this list)
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries (see the go:generate sketch after the .gitmodules listing below)
* tmp dir for running llm
.gitmodules (4 lines, 110 B, Plaintext)
[submodule "llm/llama.cpp/ggml"]
|
|
path = llm/llama.cpp/ggml
|
|
url = https://github.com/ggerganov/llama.cpp.git
|