ollama/llm
Gered 47c356a6cd
disable avx while still allowing gpu support

As discussed in this issue, and per the most recent comment on how to
fix it (at least temporarily), here:

https://github.com/ollama/ollama/issues/2187#issuecomment-2262876198
2024-09-22 13:27:17 -04:00
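The change described by this commit amounts to building the llama.cpp runners with AVX instructions disabled while keeping GPU offload enabled, so the binaries run on CPUs without AVX. A minimal sketch of such a build configuration is below; the `GGML_AVX`, `GGML_AVX2`, and `GGML_CUDA` flag names are assumptions based on llama.cpp's CMake options, not taken from this repository's generate scripts, and may need adjusting to match the pinned llama.cpp commit (8962422).

```shell
# Sketch: configure llama.cpp with AVX/AVX2 disabled but CUDA offload on.
# Flag names are assumptions; verify against the pinned llama.cpp commit.
cmake -B build \
  -DGGML_AVX=OFF \
  -DGGML_AVX2=OFF \
  -DGGML_CUDA=ON

cmake --build build --config Release
```

The trade-off is slower CPU-side inference (no AVX vectorization), but machines lacking AVX no longer crash on startup, and GPU acceleration is preserved.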
ext_server runner: Flush pending responses before returning 2024-09-11 16:39:32 -07:00
generate disable avx while still allowing gpu support 2024-09-22 13:27:17 -04:00
llama.cpp@8962422b1c llm: update llama.cpp commit to 8962422 (#6618) 2024-09-03 21:12:39 -04:00
patches llm: add solar pro (preview) (#6846) 2024-09-17 18:11:26 -07:00
filetype.go Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) 2024-05-23 13:21:49 -07:00
ggla.go update convert test to check result data 2024-07-31 10:59:38 -07:00
ggml.go Merge pull request #6260 from ollama/mxyng/mem 2024-09-05 13:22:08 -07:00
ggml_test.go llm: speed up gguf decoding by a lot (#5246) 2024-06-24 21:47:52 -07:00
gguf.go add conversion for microsoft phi 3 mini/medium 4k, 128 2024-08-12 15:13:29 -07:00
llm.go lint 2024-08-01 17:06:06 -07:00
llm_darwin.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_linux.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_windows.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
memory.go Improve logging on GPU too small (#6666) 2024-09-06 08:29:36 -07:00
memory_test.go llama3.1 2024-08-21 11:49:31 -07:00
server.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
status.go Catch one more error log 2024-08-05 09:28:07 -07:00