diff --git a/README.md b/README.md
index 44ee7052..84996372 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,8 @@ curl https://ollama.ai/install.sh | sh
 
 ### Docker
 
-See the official [Docker image](https://hub.docker.com/r/ollama/ollama).
+The official [Ollama Docker image `ollama/ollama`](https://hub.docker.com/r/ollama/ollama)
+is available on Docker Hub.
 
 ## Quickstart
 
@@ -178,8 +179,7 @@ ollama list
 Install `cmake` and `go`:
 
 ```
-brew install cmake
-brew install go
+brew install cmake go
 ```
 
 Then generate dependencies and build:
@@ -203,9 +203,8 @@ Finally, in a separate shell, run a model:
 
 ## REST API
 
-See the [API documentation](docs/api.md) for all endpoints.
-
-Ollama has an API for running and managing models. For example to generate text from a model:
+Ollama has a REST API for running and managing models.
+For example, to generate text from a model:
 
 ```
 curl -X POST http://localhost:11434/api/generate -d '{
@@ -214,6 +213,8 @@ curl -X POST http://localhost:11434/api/generate -d '{
 }'
 ```
 
+See the [API documentation](./docs/api.md) for all endpoints.
+
 ## Community Integrations
 
 - [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
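
The `POST /api/generate` call shown in the reworded REST API section can also be driven from a small client script rather than `curl`. The sketch below is illustrative, not part of this diff: it assumes Ollama is listening on its default port `11434` and that the model name passed in (e.g. `llama2`) has already been pulled; the streamed newline-delimited JSON chunks with a `response` field match Ollama's documented generate endpoint behavior.

```python
import json
from urllib import request

# Default Ollama endpoint; adjust host/port if your server differs.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_payload(model: str, prompt: str) -> bytes:
    """Serialize the JSON body used by POST /api/generate."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")


def generate(model: str, prompt: str) -> None:
    """POST to /api/generate and print each streamed response chunk.

    Requires a running Ollama server; each line of the response is a
    JSON object whose "response" field holds the next text fragment.
    """
    req = request.Request(
        OLLAMA_URL,
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        for line in resp:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)

# Example (needs a running server and a pulled model):
# generate("llama2", "Why is the sky blue?")
```

The payload builder is split out so the request body can be constructed and inspected without a live server.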