diff --git a/README.md b/README.md
index 2b91beea..6e33ab4e 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ Get up and running with large language models locally.
 
 ### macOS
 
-[Download](https://ollama.ai/download/Ollama-darwin.zip)
+[Download](https://ollama.com/download/Ollama-darwin.zip)
 
 ### Windows
 
@@ -19,7 +19,7 @@ Coming soon! For now, you can install Ollama on Windows via WSL2.
 ### Linux & WSL2
 
 ```
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
 ```
 
 [Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)
@@ -35,7 +35,7 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
 
 ## Quickstart
 
-To run and chat with [Llama 2](https://ollama.ai/library/llama2):
+To run and chat with [Llama 2](https://ollama.com/library/llama2):
 
 ```
 ollama run llama2
@@ -43,7 +43,7 @@ ollama run llama2
 
 ## Model library
 
-Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library')
+Ollama supports a list of open-source models available on [ollama.com/library](https://ollama.com/library 'ollama model library')
 
 Here are some example open-source models that can be downloaded:
 
diff --git a/docs/README.md b/docs/README.md
index fd5b902f..c6939661 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -10,7 +10,7 @@ Create new models or modify models already in the library using the Modelfile.
 
 Import models using source model weights found on Hugging Face and similar sites by referring to the **[Import Documentation](./import.md)**.
 
-Installing on Linux in most cases is easy using the script on Ollama.ai. To get more detail about the install, including CUDA drivers, see the **[Linux Documentation](./linux.md)**.
+Installing on Linux in most cases is easy using the script on [ollama.com/download](https://ollama.com/download). To get more detail about the install, including CUDA drivers, see the **[Linux Documentation](./linux.md)**.
 
 Many of our users like the flexibility of using our official Docker Image. Learn more about using Docker with Ollama using the **[Docker Documentation](https://hub.docker.com/r/ollama/ollama)**.
 
diff --git a/docs/import.md b/docs/import.md
index 9813cd1c..114e59a2 100644
--- a/docs/import.md
+++ b/docs/import.md
@@ -123,9 +123,9 @@ ollama run example "What is your favourite condiment?"
 
 Publishing models is in early alpha. If you'd like to publish your model to share with others, follow these steps:
 
-1. Create [an account](https://ollama.ai/signup)
+1. Create [an account](https://ollama.com/signup)
 2. Run `cat ~/.ollama/id_ed25519.pub` to view your Ollama public key. Copy this to the clipboard.
-3. Add your public key to your [Ollama account](https://ollama.ai/settings/keys)
+3. Add your public key to your [Ollama account](https://ollama.com/settings/keys)
 
 Next, copy your model to your username's namespace:
 
@@ -139,7 +139,7 @@ Then push the model:
 ollama push <your username>/example
 ```
 
-After publishing, your model will be available at `https://ollama.ai/<your username>/example`.
+After publishing, your model will be available at `https://ollama.com/<your username>/example`.
 
 ## Quantization reference
 
diff --git a/docs/linux.md b/docs/linux.md
index abd63320..29110b05 100644
--- a/docs/linux.md
+++ b/docs/linux.md
@@ -3,9 +3,11 @@
 ## Install
 
 Install Ollama running this one-liner:
+
 >
+
 ```bash
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
 ```
 
 ## Manual install
@@ -15,7 +17,7 @@ curl https://ollama.ai/install.sh | sh
 Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:
 
 ```bash
-sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
 sudo chmod +x /usr/bin/ollama
 ```
 
@@ -75,13 +77,13 @@ sudo systemctl start ollama
 Update ollama by running the install script again:
 
 ```bash
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
 ```
 
 Or by downloading the ollama binary:
 
 ```bash
-sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
 sudo chmod +x /usr/bin/ollama
 ```
 
@@ -110,6 +112,7 @@ sudo rm $(which ollama)
 ```
 
 Remove the downloaded models and Ollama service user and group:
+
 ```bash
 sudo rm -r /usr/share/ollama
 sudo userdel ollama
diff --git a/docs/modelfile.md b/docs/modelfile.md
index 6d6ac152..b92af782 100644
--- a/docs/modelfile.md
+++ b/docs/modelfile.md
@@ -67,13 +67,13 @@ To use this:
 
 More examples are available in the [examples directory](../examples).
 
-### `Modelfile`s in [ollama.ai/library][1]
+### `Modelfile`s in [ollama.com/library][1]
 
-There are two ways to view `Modelfile`s underlying the models in [ollama.ai/library][1]:
+There are two ways to view `Modelfile`s underlying the models in [ollama.com/library][1]:
 
 - Option 1: view a details page from a model's tags page:
-  1. Go to a particular model's tags (e.g. https://ollama.ai/library/llama2/tags)
-  2. Click on a tag (e.g. https://ollama.ai/library/llama2:13b)
+  1. Go to a particular model's tags (e.g. https://ollama.com/library/llama2/tags)
+  2. Click on a tag (e.g. https://ollama.com/library/llama2:13b)
   3. Scroll down to "Layers"
      - Note: if the [`FROM` instruction](#from-required) is not present,
        it means the model was created from a local file
@@ -225,4 +225,4 @@ MESSAGE assistant yes
 - the **`Modelfile` is not case sensitive**. In the examples, uppercase instructions are used to make it easier to distinguish it from arguments.
 - Instructions can be in any order. In the examples, the `FROM` instruction is first to keep it easily readable.
 
-[1]: https://ollama.ai/library
+[1]: https://ollama.com/library
diff --git a/docs/tutorials/nvidia-jetson.md b/docs/tutorials/nvidia-jetson.md
index 85cf741c..2d3adb98 100644
--- a/docs/tutorials/nvidia-jetson.md
+++ b/docs/tutorials/nvidia-jetson.md
@@ -17,7 +17,7 @@ Prerequisites:
 
 Here are the steps:
 
-- Install Ollama via standard Linux command (ignore the 404 error): `curl https://ollama.ai/install.sh | sh`
+- Install Ollama via standard Linux command (ignore the 404 error): `curl https://ollama.com/install.sh | sh`
 - Stop the Ollama service: `sudo systemctl stop ollama`
 - Start Ollama serve in a tmux session called ollama_jetson and reference the CUDA libraries path: `tmux has-session -t ollama_jetson 2>/dev/null || tmux new-session -d -s ollama_jetson 'LD_LIBRARY_PATH=/usr/local/cuda/lib64 ollama serve'`
 
diff --git a/examples/jupyter-notebook/ollama.ipynb b/examples/jupyter-notebook/ollama.ipynb
index d57e2057..bee353cb 100644
--- a/examples/jupyter-notebook/ollama.ipynb
+++ b/examples/jupyter-notebook/ollama.ipynb
@@ -8,7 +8,7 @@
    "outputs": [],
    "source": [
     "# Download and run the Ollama Linux install script\n",
-    "!curl https://ollama.ai/install.sh | sh\n",
+    "!curl -fsSL https://ollama.com/install.sh | sh\n",
     "!command -v systemctl >/dev/null && sudo systemctl stop ollama"
    ]
   },
diff --git a/examples/kubernetes/README.md b/examples/kubernetes/README.md
index cb5f39f9..c522ba76 100644
--- a/examples/kubernetes/README.md
+++ b/examples/kubernetes/README.md
@@ -2,28 +2,28 @@
 
 ## Prerequisites
 
-- Ollama: https://ollama.ai/download
+- Ollama: https://ollama.com/download
 - Kubernetes cluster. This example will use Google Kubernetes Engine.
 
 ## Steps
 
 1. Create the Ollama namespace, daemon set, and service
 
-      ```bash
-      kubectl apply -f cpu.yaml
-      ```
+   ```bash
+   kubectl apply -f cpu.yaml
+   ```
 
 1. Port forward the Ollama service to connect and use it locally
 
-      ```bash
-      kubectl -n ollama port-forward service/ollama 11434:80
-      ```
+   ```bash
+   kubectl -n ollama port-forward service/ollama 11434:80
+   ```
 
 1. Pull and run a model, for example `orca-mini:3b`
 
-      ```bash
-      ollama run orca-mini:3b
-      ```
+   ```bash
+   ollama run orca-mini:3b
+   ```
 
 ## (Optional) Hardware Acceleration
 
diff --git a/examples/langchain-python-rag-websummary/README.md b/examples/langchain-python-rag-websummary/README.md
index 9ccc54cc..3f3b9873 100644
--- a/examples/langchain-python-rag-websummary/README.md
+++ b/examples/langchain-python-rag-websummary/README.md
@@ -1,6 +1,6 @@
 # LangChain Web Summarization
 
-This example summarizes the website, [https://ollama.ai/blog/run-llama2-uncensored-locally](https://ollama.ai/blog/run-llama2-uncensored-locally)
+This example summarizes the website, [https://ollama.com/blog/run-llama2-uncensored-locally](https://ollama.com/blog/run-llama2-uncensored-locally)
 
 ## Running the Example
 
diff --git a/examples/langchain-python-rag-websummary/main.py b/examples/langchain-python-rag-websummary/main.py
index 2bb25d75..cd2ef47f 100644
--- a/examples/langchain-python-rag-websummary/main.py
+++ b/examples/langchain-python-rag-websummary/main.py
@@ -2,7 +2,7 @@ from langchain.llms import Ollama
 from langchain.document_loaders import WebBaseLoader
 from langchain.chains.summarize import load_summarize_chain
 
-loader = WebBaseLoader("https://ollama.ai/blog/run-llama2-uncensored-locally")
+loader = WebBaseLoader("https://ollama.com/blog/run-llama2-uncensored-locally")
 docs = loader.load()
 
 llm = Ollama(model="llama2")
diff --git a/examples/python-loganalysis/readme.md b/examples/python-loganalysis/readme.md
index 828e8de2..60c57217 100644
--- a/examples/python-loganalysis/readme.md
+++ b/examples/python-loganalysis/readme.md
@@ -40,13 +40,13 @@ You are a log file analyzer. You will receive a set of lines from a log file for
 """
 ```
 
-This model is available at https://ollama.ai/mattw/loganalyzer. You can customize it and add to your own namespace using the command `ollama create <namespace/modelname> -f <path-to-modelfile>` then `ollama push <namespace/modelname>`.
+This model is available at https://ollama.com/mattw/loganalyzer. You can customize it and add to your own namespace using the command `ollama create <namespace/modelname> -f <path-to-modelfile>` then `ollama push <namespace/modelname>`.
 
 Then loganalysis.py scans all the lines in the given log file and searches for the word 'error'. When the word is found, the 10 lines before and after are set as the prompt for a call to the Generate API.
 
 ```python
 data = {
-    "prompt": "\n".join(error_logs),
+  "prompt": "\n".join(error_logs),
   "model": "mattw/loganalyzer"
 }
 ```
diff --git a/examples/typescript-mentors/README.md b/examples/typescript-mentors/README.md
index 5ab1cc55..c3ce9c82 100644
--- a/examples/typescript-mentors/README.md
+++ b/examples/typescript-mentors/README.md
@@ -29,9 +29,9 @@ You can also add your own character to be chosen at random when you ask a questi
    ```bash
    ollama pull stablebeluga2:70b-q4_K_M
    ```
-   
+
 2. Create a new character:
-   
+
    ```bash
    npm run charactergen "Lorne Greene"
    ```
@@ -41,15 +41,15 @@ You can also add your own character to be chosen at random when you ask a questi
 3. Now you can create a model with this command:
 
    ```bash
-   ollama create <YourNamespace>/lornegreene -f lornegreene/Modelfile
+   ollama create <username>/lornegreene -f lornegreene/Modelfile
    ```
 
-   `YourNamespace` is whatever name you set up when you signed up at [https://ollama.ai/signup](https://ollama.ai/signup).
+   `username` is whatever name you set up when you signed up at [https://ollama.com/signup](https://ollama.com/signup).
 
-4. To add this to your mentors, you will have to update the code as follows. On line 8 of `mentors.ts`, add an object to the array, replacing `<YourNamespace>` with the namespace you used above.
+4. To add this to your mentors, you will have to update the code as follows. On line 8 of `mentors.ts`, add an object to the array, replacing `<username>` with the username you used above.
 
    ```bash
-   {ns: "<YourNamespace>", char: "Lorne Greene"}
+   {ns: "<username>", char: "Lorne Greene"}
    ```
 
 ## Review the Code
diff --git a/scripts/install.sh b/scripts/install.sh
index e9e2ebf2..7d63a307 100644
--- a/scripts/install.sh
+++ b/scripts/install.sh
@@ -61,7 +61,7 @@ if [ -n "$NEEDS" ]; then
 fi
 
 status "Downloading ollama..."
-curl --fail --show-error --location --progress-bar -o $TEMP_DIR/ollama "https://ollama.ai/download/ollama-linux-$ARCH"
+curl --fail --show-error --location --progress-bar -o $TEMP_DIR/ollama "https://ollama.com/download/ollama-linux-$ARCH"
 
 for BINDIR in /usr/local/bin /usr/bin /bin; do
     echo $PATH | grep -q $BINDIR && break || continue