diff --git a/README.md b/README.md
index 7eece163..32d404a0 100644
--- a/README.md
+++ b/README.md
@@ -35,10 +35,10 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
 
 ## Quickstart
 
-To run and chat with [Llama 2](https://ollama.com/library/llama2):
+To run and chat with [Llama 3](https://ollama.com/library/llama3):
 
 ```
-ollama run llama2
+ollama run llama3
 ```
 
 ## Model library
@@ -49,7 +49,8 @@ Here are some example models that can be downloaded:
 
 | Model              | Parameters | Size  | Download                       |
 | ------------------ | ---------- | ----- | ------------------------------ |
-| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
+| Llama 3            | 8B         | 4.7GB | `ollama run llama3`            |
+| Llama 3            | 70B        | 40GB  | `ollama run llama3:70b`        |
 | Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
 | Dolphin Phi        | 2.7B       | 1.6GB | `ollama run dolphin-phi`       |
 | Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
@@ -97,16 +98,16 @@ See the [guide](docs/import.md) on importing models for more information.
 
 ### Customize a prompt
 
-Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:
+Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3` model:
 
 ```
-ollama pull llama2
+ollama pull llama3
 ```
 
 Create a `Modelfile`:
 
 ```
-FROM llama2
+FROM llama3
 
 # set the temperature to 1 [higher is more creative, lower is more coherent]
 PARAMETER temperature 1
@@ -141,7 +142,7 @@ ollama create mymodel -f ./Modelfile
 ### Pull a model
 
 ```
-ollama pull llama2
+ollama pull llama3
 ```
 
 > This command can also be used to update a local model. Only the diff will be pulled.
@@ -149,13 +150,13 @@ ollama pull llama2
 ### Remove a model
 
 ```
-ollama rm llama2
+ollama rm llama3
 ```
 
 ### Copy a model
 
 ```
-ollama cp llama2 my-llama2
+ollama cp llama3 my-llama2
 ```
 
 ### Multiline input
@@ -179,7 +180,7 @@ The image features a yellow smiley face, which is likely the central focus of th
 ### Pass in prompt as arguments
 
 ```
-$ ollama run llama2 "Summarize this file: $(cat README.md)"
+$ ollama run llama3 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
 ```
 
@@ -226,7 +227,7 @@ Next, start the server:
 Finally, in a separate shell, run a model:
 
 ```
-./ollama run llama2
+./ollama run llama3
 ```
 
 ## REST API
@@ -237,7 +238,7 @@ Ollama has a REST API for running and managing models.
 
 ```
 curl http://localhost:11434/api/generate -d '{
-  "model": "llama2",
+  "model": "llama3",
   "prompt":"Why is the sky blue?"
 }'
 ```
@@ -246,7 +247,7 @@
 
 ```
 curl http://localhost:11434/api/chat -d '{
-  "model": "mistral",
+  "model": "llama3",
   "messages": [
     { "role": "user", "content": "why is the sky blue?" }
   ]