diff --git a/README.md b/README.md
index 0bbf6139..ccdb25d8 100644
--- a/README.md
+++ b/README.md
@@ -74,10 +74,10 @@ ollama.search("llama-7b")
 
 ## Future CLI
 
-In the future, there will be an easy CLI for running models
+In the future, there will be an `ollama` CLI for running models
 on servers, in containers or for local development environments.
 
 ```
-ollama run huggingface.co/thebloke/llama-7b-ggml
+ollama generate huggingface.co/thebloke/llama-7b-ggml
 > Downloading [================>         ] 66.67% (2/3) 30.2MB/s
 ```
diff --git a/desktop/README.md b/desktop/README.md
index 56077302..1dabab51 100644
--- a/desktop/README.md
+++ b/desktop/README.md
@@ -1,16 +1,18 @@
 # Desktop
 
-The Ollama desktop experience
+The Ollama desktop app
 
 ## Running
 
+In the background, run the `ollama.py` [development](../docs/development.md) server:
+
+```
+python ../ollama.py serve --port 5001
+```
+
+Then run the desktop app:
+
 ```
 npm install
 npm start
 ```
-
-## Packaging
-
-```
-npm run package
-```
diff --git a/docs/development.md b/docs/development.md
index a0b37e11..16405863 100644
--- a/docs/development.md
+++ b/docs/development.md
@@ -14,14 +14,6 @@ Put your model in `models/` and run:
 python3 ollama.py serve
 ```
 
-To run the app:
-
-```
-cd desktop
-npm install
-npm start
-```
-
 ## Building
 
 If using Apple silicon, you need a Python version that supports arm64:
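
For reference, the desktop workflow these changes describe boils down to the following sketch. It assumes the commands are run from the `desktop/` directory and uses `&` to keep the server in the background, as the new README text suggests; only the two commands themselves come from the diff.

```
# Start the ollama.py development server on port 5001, in the background.
python ../ollama.py serve --port 5001 &

# Install dependencies and launch the desktop app.
npm install
npm start
```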