<div align="center">
  <img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models.

### Linux with Radeon RX 580 GPU

This branch has changes for building on the amd64 architecture (arm has been commented out in the Dockerfile) so that Ollama works with the Radeon RX 580 GPU. It should be considered experimental.

I have only tested the Docker build, on Ubuntu 22.04.4 LTS.

Make sure Docker is installed and running, and that the Docker host machine has the ROCm 5.7.1 libraries installed.
Follow this documentation for the ROCm installation, substituting 5.7.1 for the 5.7.0 references:
https://rocm.docs.amd.com/en/docs-5.7.0/deploy/linux/os-native/install.html

To build:
```
./scripts/build_docker.sh
```

After that has compiled successfully, start a container using the image:
```
docker run -e HIP_PATH=/opt/rocm/lib/ -e LD_LIBRARY_PATH=/opt/rocm/lib --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama_gpu ollama/release:0.3.10-rc1-2-g56318fb-dirty-rocm
```

Make sure to change the tag `0.3.10-rc1-2-g56318fb-dirty-rocm` to whatever your build process produces; the tag is shown in the last phase of the build, where the images are exported.
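
If you are unsure which tag was produced, listing the local images for that repository will show it:

```
docker images ollama/release
```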

Once it is running, test it out:
```
docker exec -it ollama_gpu ollama run llama3.1
```
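
To confirm the GPU is actually being used, skim the server logs for the ROCm/amdgpu detection messages (the exact wording varies between versions, so treat this as a rough check):

```
docker logs ollama_gpu | grep -iE 'rocm|amdgpu|gpu'
```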

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows preview

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.
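
For a CPU-only setup, a typical invocation (as documented on the Docker Hub page) looks like the following; for AMD GPUs there is a separate `ollama/ollama:rocm` image that additionally needs the `/dev/kfd` and `/dev/dri` devices passed through:

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```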

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

## Quickstart

To run and chat with [Llama 3.1](https://ollama.com/library/llama3.1):

```
ollama run llama3.1
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library')

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`          |
| Llama 3.1          | 70B        | 40GB  | `ollama run llama3.1:70b`      |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`     |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`         |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`            |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`        |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```
   ollama create example -f Modelfile
   ```

3. Run the model

   ```
   ollama run example
   ```
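
To double-check what was created, `ollama show` can print the generated Modelfile back (the `--modelfile` flag selects that view):

```
ollama show example --modelfile
```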

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.
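
The flow for Safetensors is the same create-and-run sequence shown above for GGUF; a minimal Modelfile sketch, assuming the guide's convention of pointing `FROM` at the model directory (the path below is illustrative):

```
FROM /path/to/safetensors/directory
```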

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.1` model:

```
ollama pull llama3.1
```
2023-07-18 16:22:33 -04:00
Create a `Modelfile` :
2023-07-05 15:37:33 -04:00
2023-06-30 12:31:00 -04:00
```
2024-07-28 17:21:38 -04:00
FROM llama3.1
2023-07-20 11:17:09 -04:00
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
2023-12-12 14:43:19 -05:00
# set the system message
2023-07-20 05:21:51 -04:00
SYSTEM """
2023-07-18 16:32:06 -04:00
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
2023-07-18 16:22:33 -04:00
"""
2023-06-29 18:25:02 -04:00
```
2023-07-07 16:14:58 -04:00
2023-07-18 16:22:33 -04:00
Next, create and run the model:
2023-07-07 16:14:58 -04:00
```
2023-07-18 16:22:33 -04:00
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
2023-07-07 16:14:58 -04:00
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama3.1
```

> This command can also be used to update a local model. Only the diff will be pulled.

### Remove a model

```
ollama rm llama3.1
```

### Copy a model

```
ollama cp llama3.1 my-model
```

### Multiline input
For multiline input, you can wrap text with `"""`:
```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass the prompt as an argument

```
$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```
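
Because the prompt can be passed as an argument, `ollama run` also fits into ordinary shell scripts; a small sketch (the output file name is illustrative):

```
# run non-interactively and save the response to a file
ollama run llama3.1 "Why is the sky blue?" > answer.txt
```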

### Show model information

```
ollama show llama3.1
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.
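
By default the server listens on `127.0.0.1:11434`. To bind it to a different address, for example so that other machines on your network can reach it, set the `OLLAMA_HOST` environment variable before starting it:

```
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```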

## Building

See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/development.md)

### Running local builds

After building the `ollama` binary per the developer guide, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama3.1
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```
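
Responses stream back as a series of JSON objects by default; to receive a single reply object instead, pass `"stream": false`:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```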

### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for all endpoints.

## Community Integrations

### Web & Desktop

- [Open WebUI](https://github.com/open-webui/open-webui)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [Hollama](https://github.com/fmaclen/hollama)
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
- [LibreChat](https://github.com/danny-avila/LibreChat)
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Saddle](https://github.com/jikkuatwork/saddle)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Chatbot UI v2](https://github.com/mckaywrigley/chatbot-ui)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [Dify.AI](https://github.com/langgenius/dify)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)
- [Msty](https://msty.app)
- [Chatbox](https://github.com/Bin-Huang/Chatbox)
- [WinForm Ollama Copilot](https://github.com/tgraupmann/WinForm_Ollama_Copilot)
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) with [Get Started Doc](https://docs.nextchat.dev/models/ollama)
- [Alpaca WebUI](https://github.com/mmo80/alpaca-webui)
- [OllamaGUI](https://github.com/enoch1118/ollamaGUI)
- [OpenAOE](https://github.com/InternLM/OpenAOE)
- [Odin Runes](https://github.com/leonid20000/OdinRunes)
- [LLM-X](https://github.com/mrdjohnson/llm-x) (Progressive Web App)
- [AnythingLLM (Docker + macOS/Windows/Linux native app)](https://github.com/Mintplex-Labs/anything-llm)
- [Ollama Basic Chat: Uses HyperDiv Reactive UI](https://github.com/rapidarchitect/ollama_basic_chat)
- [Ollama-chats RPG](https://github.com/drazdra/ollama-chats)
- [QA-Pilot](https://github.com/reid41/QA-Pilot) (Chat with Code Repository)
- [ChatOllama](https://github.com/sugarforever/chat-ollama) (Open Source Chatbot based on Ollama with Knowledge Bases)
- [CRAG Ollama Chat](https://github.com/Nagi-ovo/CRAG-Ollama-Chat) (Simple Web Search with Corrective RAG)
- [RAGFlow](https://github.com/infiniflow/ragflow) (Open-source Retrieval-Augmented Generation engine based on deep document understanding)
- [StreamDeploy](https://github.com/StreamDeploy-DevRel/streamdeploy-llm-app-scaffold) (LLM Application Scaffold)
- [chat](https://github.com/swuecho/chat) (chat web app for teams)
- [Lobe Chat](https://github.com/lobehub/lobe-chat) with [Integrating Doc](https://lobehub.com/docs/self-hosting/examples/ollama)
- [Ollama RAG Chatbot](https://github.com/datvodinh/rag-chatbot.git) (Local Chat with multiple PDFs using Ollama and RAG)
- [BrainSoup](https://www.nurgo-software.com/products/brainsoup) (Flexible native client with RAG & multi-agent automation)
- [macai](https://github.com/Renset/macai) (macOS client for Ollama, ChatGPT, and other compatible API back-ends)
- [Olpaka](https://github.com/Otacon/olpaka) (User-friendly Flutter Web App for Ollama)
- [OllamaSpring](https://github.com/CrazyNeil/OllamaSpring) (Ollama Client for macOS)
- [LLocal.in](https://github.com/kartikm7/llocal) (Easy to use Electron Desktop Client for Ollama)
- [AiLama](https://github.com/zeyoyt/ailama) (A Discord user app that lets you interact with Ollama anywhere in Discord)
- [Ollama with Google Mesop](https://github.com/rapidarchitect/ollama_mesop/) (Mesop Chat Client implementation with Ollama)
- [Painting Droid](https://github.com/mateuszmigas/painting-droid) (Painting app with AI integrations)
- [Kerlig AI](https://www.kerlig.com/) (AI writing assistant for macOS)
- [AI Studio](https://github.com/MindWorkAI/AI-Studio)
- [Sidellama](https://github.com/gyopak/sidellama) (browser-based LLM client)
- [LLMStack](https://github.com/trypromptly/LLMStack) (No-code multi-agent framework to build LLM agents and workflows)
- [BoltAI for Mac](https://boltai.com) (AI Chat Client for Mac)
- [Harbor](https://github.com/av/harbor) (Containerized LLM Toolkit with Ollama as default backend)
- [Go-CREW](https://www.jonathanhecl.com/go-crew/) (Powerful Offline RAG in Golang)
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot and Ollama4j
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
- [Claude Dev](https://github.com/saoudrizwan/claude-dev) - VSCode extension for multi-file/whole-repo coding
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollero.nvim](https://github.com/marco-souza/ollero.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [ooo](https://github.com/npahlfer/ooo)
- [shell-pilot](https://github.com/reid41/shell-pilot)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/)
- [typechat-cli](https://github.com/anaisbetts/typechat-cli)
- [ShellOracle](https://github.com/djcopley/ShellOracle)
- [tlm](https://github.com/yusufcanb/tlm)
- [podman-ollama](https://github.com/ericcurtin/podman-ollama)
- [gollama](https://github.com/sammcj/gollama)
- [Ollama eBook Summary](https://github.com/cognitivetech/ollama-ebook-summary/)

### Apple Vision Pro

- [Enchanted](https://github.com/AugustDev/enchanted)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md) (Connects Ollama models with nearly 200 data platforms and apps)
- [chromem-go](https://github.com/philippgille/chromem-go/blob/v0.5.0/embed_ollama.go) with [example](https://github.com/philippgille/chromem-go/tree/v0.5.0/examples/rag-wikipedia-ollama)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Gentoo](https://github.com/gentoo/guru/tree/master/app-misc/ollama)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)
- [Guix channel](https://codeberg.org/tusharhero/ollama-guix)
- [Nix package](https://search.nixos.org/packages?channel=24.05&show=ollama&from=0&size=50&sort=relevance&type=packages&query=ollama)
- [Flox](https://flox.dev/blog/ollama-part-one)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
- [LangChainRust](https://github.com/Abraxas-365/langchain-rust) with [example](https://github.com/Abraxas-365/langchain-rust/blob/main/examples/llm_ollama.rs)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaFarm for Go](https://github.com/presbrey/ollamafarm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama-hpp for C++](https://github.com/jmont-dev/ollama-hpp)
- [Ollama4j for Java](https://github.com/ollama4j/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama for R - ollama-r](https://github.com/hauselin/ollama-r)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)
- [Ollama Connector for SAP ABAP](https://github.com/b-tocs/abap_btocs_ollama)
- [Testcontainers](https://testcontainers.com/modules/ollama/)
- [Portkey](https://portkey.ai/docs/welcome/integration-guides/ollama)
- [PromptingTools.jl](https://github.com/svilupp/PromptingTools.jl) with an [example](https://svilupp.github.io/PromptingTools.jl/dev/examples/working_with_ollama)
- [LlamaScript](https://github.com/Project-Llama/llamascript)
- [Gollm](https://docs.gollm.co/examples/ollama-example)
- [Ollamaclient for Golang](https://github.com/xyproto/ollamaclient)
- [High-level function abstraction in Go](https://gitlab.com/tozd/go/fun)
- [Ollama PHP](https://github.com/ArdaGnsrn/ollama-php)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama Discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Cliobot](https://github.com/herval/cliobot) (Telegram bot with Ollama support)
- [Copilot for Obsidian plugin](https://github.com/logancyang/obsidian-copilot)
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Ollama Copilot](https://github.com/bernardo-bruning/ollama-copilot) (Proxy that allows you to use Ollama as a copilot like GitHub Copilot)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and Hugging Face)
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)
- [AI Telegram Bot](https://github.com/tusharhero/aitelegrambot) (Telegram bot using Ollama in the backend)
- [AI ST Completion](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (Sublime Text 4 AI assistant plugin with Ollama support)
- [Discord-Ollama Chat Bot](https://github.com/kevinthedang/discord-ollama) (Generalized TypeScript Discord Bot w/ Tuning Documentation)
- [Discord AI chat/moderation bot](https://github.com/rapmd73/Companion) (Chat/moderation bot written in Python; uses Ollama to create personalities)
- [Headless Ollama](https://github.com/nischalj10/headless-ollama) (Scripts to automatically install the Ollama client & models on any OS for apps that depend on the Ollama server)
- [vnc-lm](https://github.com/jk011ru/vnc-lm) (A containerized Discord bot with support for attachments and web links)
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)

### Supported backends

- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.