ollama/scripts
Daniel Hiltgen 2fcd41ef81 Fail fast on WSL1 while allowing on WSL2
This prevents users from accidentally installing on WSL1, printing instructions
on how to upgrade their WSL instance to version 2. Once running WSL2, users
with an NVIDIA card can follow NVIDIA's instructions to set up GPU passthrough
and run models on the GPU. This is not possible on WSL1.
2024-01-03 16:02:32 -08:00
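The fail-fast behavior above hinges on telling WSL1 and WSL2 apart. A minimal sketch of one common approach, classifying the kernel version string (as reported by `uname -r` or `/proc/version`): WSL2 kernels typically contain "microsoft-standard-WSL2" while WSL1 kernels contain a bare "Microsoft". The `detect_wsl` helper and the exact patterns here are illustrative assumptions, not necessarily what install.sh does.

```shell
#!/bin/sh
# Sketch: classify an environment from its kernel version string.
# Assumption: WSL2 kernels report "...-microsoft-standard-WSL2",
# WSL1 kernels report "...-Microsoft", anything else is native Linux.
detect_wsl() {
    case "$1" in
        *WSL2*)         echo "wsl2" ;;    # GPU passthrough is possible here
        *[Mm]icrosoft*) echo "wsl1" ;;    # an installer would fail fast here
        *)              echo "native" ;;
    esac
}

# An installer would feed it the running kernel, e.g.:
#   if [ "$(detect_wsl "$(uname -r)")" = "wsl1" ]; then
#       echo "WSL1 is unsupported; upgrade with: wsl --set-version <distro> 2" >&2
#       exit 1
#   fi
detect_wsl "5.15.90.1-microsoft-standard-WSL2"
detect_wsl "4.4.0-19041-Microsoft"
detect_wsl "6.8.0-generic"
```

Checking the WSL2 pattern first matters: a WSL2 kernel string also contains "microsoft", so the more specific match must win.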
build.sh                    update build_darwin.sh                       2023-09-29 11:29:23 -07:00
build_darwin.sh             Add cgo implementation for llama.cpp         2023-12-19 09:05:46 -08:00
build_docker.sh             use docker build in build scripts            2024-01-02 19:32:54 -05:00
build_linux.sh              use docker build in build scripts            2024-01-02 19:32:54 -05:00
build_remote.py             Adapted rocm support to cgo based llama.cpp  2023-12-19 09:05:46 -08:00
install.sh                  Fail fast on WSL1 while allowing on WSL2     2024-01-03 16:02:32 -08:00
publish.sh                  darwin build script                          2023-07-28 12:23:27 -07:00
push_docker.sh              use docker build in build scripts            2024-01-02 19:32:54 -05:00
setup_integration_tests.sh  Guard integration tests with a tag           2023-12-22 16:33:27 -08:00