ollama/gpu
Daniel Hiltgen f07f8b7a9e Harden for zero detected GPUs
At least with the ROCm libraries, it's possible to have the library
present with zero GPUs. This fix avoids a divide-by-zero bug in llm.go
when we try to calculate GPU memory with zero GPUs.
2024-01-28 13:13:10 -08:00
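The failure mode the commit describes can be sketched as follows. This is a minimal illustration, not ollama's actual llm.go code: the function name memoryPerGPU and its signature are hypothetical, but it shows the guard needed when the GPU runtime library loads successfully yet reports zero devices.

```go
package main

import "fmt"

// memoryPerGPU is a hypothetical sketch of the hardening described above.
// If the ROCm/CUDA libraries are present but report zero GPUs, dividing
// total memory by the device count would be a divide-by-zero, so we check
// the count first and fall back to reporting no per-GPU memory.
func memoryPerGPU(gpuCount int, totalMem uint64) uint64 {
	if gpuCount == 0 {
		// Zero detected GPUs: caller should fall back to CPU inference.
		return 0
	}
	return totalMem / uint64(gpuCount)
}

func main() {
	fmt.Println(memoryPerGPU(0, 8<<30)) // safe with zero GPUs: prints 0
	fmt.Println(memoryPerGPU(2, 8<<30)) // 8 GiB split across 2 GPUs
}
```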
cpu_common.go Mechanical switch from log to slog 2024-01-18 14:12:57 -08:00
gpu.go Harden for zero detected GPUs 2024-01-28 13:13:10 -08:00
gpu_darwin.go Fix up the CPU fallback selection 2024-01-11 15:27:06 -08:00
gpu_info.h Ignore AMD integrated GPUs 2024-01-26 09:21:35 -08:00
gpu_info_cpu.c calculate overhead based number of gpu devices (#1875) 2024-01-09 15:53:33 -05:00
gpu_info_cuda.c Fix crash on cuda ml init failure 2024-01-26 09:18:33 -08:00
gpu_info_cuda.h More logging for gpu management 2024-01-24 10:32:36 -08:00
gpu_info_rocm.c Update gpu_info_rocm.c 2024-01-26 22:08:27 -08:00
gpu_info_rocm.h More logging for gpu management 2024-01-24 10:32:36 -08:00
gpu_test.go Merge pull request #1819 from dhiltgen/multi_variant 2024-01-11 14:00:48 -08:00
types.go Support multiple variants for a given llm lib type 2024-01-10 17:27:51 -08:00