docs(README): replace llama.cpp link with python bindings

Luc Georges 2024-02-12 11:07:31 +01:00
parent 4891468c1a
commit 8926969265
GPG key ID: 22924A120A2C2CE0
2 changed files with 5 additions and 1 deletion


@@ -27,7 +27,7 @@ Note that **llm-ls** does not export any data anywhere (other than setting a use
### Multiple backends
-**llm-ls** is compatible with Hugging Face's [Inference API](https://huggingface.co/docs/api-inference/en/index), Hugging Face's [text-generation-inference](https://github.com/huggingface/text-generation-inference), [ollama](https://github.com/ollama/ollama) and OpenAI compatible APIs, like [llama.cpp](https://github.com/ggerganov/llama.cpp/tree/master/examples/server).
+**llm-ls** is compatible with Hugging Face's [Inference API](https://huggingface.co/docs/api-inference/en/index), Hugging Face's [text-generation-inference](https://github.com/huggingface/text-generation-inference), [ollama](https://github.com/ollama/ollama) and OpenAI compatible APIs, like the [python llama.cpp server bindings](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#openai-compatible-web-server).
## Compatible extensions


@@ -60,6 +60,10 @@ pub enum Backend {
#[serde(default = "hf_default_url")]
url: String,
},
+    // TODO:
+    // LlamaCpp {
+    //     url: String,
+    // },
Ollama {
url: String,
},
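The hunk above leaves a `LlamaCpp` variant commented out as a TODO in the `Backend` enum. A minimal sketch of how such a variant might slot in alongside the existing ones, purely as an illustration and not the actual llm-ls code (the `completions_path` helper and the endpoint routing are assumptions; the llama.cpp python bindings do expose an OpenAI-compatible `/v1/completions` route):

```rust
// Hypothetical sketch of the Backend enum with the TODO variant enabled.
// The real llm-ls enum uses serde attributes for config deserialization,
// omitted here to keep the example dependency-free.
#[derive(Debug, PartialEq)]
enum Backend {
    HuggingFace { url: String },
    LlamaCpp { url: String },
    Ollama { url: String },
}

impl Backend {
    // Assumed helper: pick a completions endpoint path per backend.
    // llama-cpp-python's server is OpenAI compatible (/v1/completions);
    // ollama's native generate endpoint is /api/generate.
    fn completions_path(&self) -> &'static str {
        match self {
            Backend::HuggingFace { .. } => "/",
            Backend::LlamaCpp { .. } => "/v1/completions",
            Backend::Ollama { .. } => "/api/generate",
        }
    }
}

fn main() {
    let backend = Backend::LlamaCpp {
        url: "http://localhost:8000".to_string(),
    };
    println!("{}{}", match &backend {
        Backend::HuggingFace { url }
        | Backend::LlamaCpp { url }
        | Backend::Ollama { url } => url.as_str(),
    }, backend.completions_path());
}
```

This mirrors the README change: once the variant exists, an OpenAI-compatible server like llama-cpp-python can be addressed the same way as the other backends, differing only in its base URL and endpoint path.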