diff --git a/README.md b/README.md
index 648826bf..8d1da607 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ Get up and running with large language models locally.
 
 ### Windows
 
-Coming soon!
+Coming soon! For now, you can install Ollama on Windows via WSL2.
 
 ### Linux & WSL2
 
diff --git a/docs/faq.md b/docs/faq.md
index 28631a79..e6771db8 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -154,3 +154,21 @@ docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-
 The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit). See [ollama/ollama](https://hub.docker.com/r/ollama/ollama) for more details.
 
 GPU acceleration is not available for Docker Desktop in macOS due to the lack of GPU passthrough and emulation.
+
+## Why is networking slow in WSL2 on Windows 10?
+
+This can affect both installing Ollama and downloading models.
+
+Open `Control Panel > Networking and Internet > View network status and tasks` and click `Change adapter settings` in the left panel. Find the `vEthernet (WSL)` adapter, right-click it and select `Properties`.
+Click `Configure`, open the `Advanced` tab, and search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. *Disable* both of these properties.
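+
+Alternatively, here is a sketch of the same change from an elevated PowerShell prompt using the built-in `NetAdapter` cmdlets. The adapter name `vEthernet (WSL)` is an assumption and may differ on your system, so verify it with `Get-NetAdapter` first:
+
+```powershell
+# Disable Large Send Offload v2 for both IPv4 and IPv6 on the WSL virtual adapter.
+# Assumes the adapter is named "vEthernet (WSL)"; verify with Get-NetAdapter.
+Disable-NetAdapterLso -Name "vEthernet (WSL)" -IPv4 -IPv6
+
+# Confirm that both LSO settings now report as disabled.
+Get-NetAdapterLso -Name "vEthernet (WSL)"
+```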