ollama/gpu

Latest commit: 1961a81f03 by Daniel Hiltgen (2024-01-09 11:28:24 -08:00)
Set correct CUDA minimum compute capability version

    If you attempt to run the current CUDA build on compute capability 5.2
    cards, you'll hit the following failure:

        cuBLAS error 15 at ggml-cuda.cu:7956: the requested functionality is not supported
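The gating logic behind this fix can be sketched as a simple version comparison: a card's compute capability (major, minor) must meet a minimum, or ollama falls back to CPU inference. The sketch below assumes a hypothetical minimum of 6.0 for illustration; the actual threshold is defined in the CUDA detection code (gpu_info_cuda.c / gpu.go) and may differ, and the function name `cudaSupported` is invented here, not part of the package.

```go
package main

import "fmt"

// Assumed minimum compute capability for illustration only; the real
// value lives in the ollama GPU detection code and may differ.
const (
	minComputeMajor = 6
	minComputeMinor = 0
)

// cudaSupported reports whether a GPU with the given compute capability
// meets the assumed minimum. Cards below it (such as the 5.2 cards that
// trigger the cuBLAS error above) would be routed to the CPU path.
func cudaSupported(major, minor int) bool {
	if major != minComputeMajor {
		return major > minComputeMajor
	}
	return minor >= minComputeMinor
}

func main() {
	fmt.Println(cudaSupported(5, 2)) // compute capability 5.2: below minimum
	fmt.Println(cudaSupported(6, 1)) // compute capability 6.1: meets minimum
}
```

A real implementation would read the (major, minor) pair from the CUDA driver rather than hard-coding test values.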
gpu.go            Set correct CUDA minimum compute capability version              2024-01-09 11:28:24 -08:00
gpu_darwin.go     Offload layers to GPU based on new model size estimates (#1850)  2024-01-08 16:42:00 -05:00
gpu_info.h        Fix windows system memory lookup                                 2024-01-03 08:50:01 -08:00
gpu_info_cpu.c    Fix windows system memory lookup                                 2024-01-03 08:50:01 -08:00
gpu_info_cuda.c   Detect very old CUDA GPUs and fall back to CPU                   2024-01-06 21:40:29 -08:00
gpu_info_cuda.h   Detect very old CUDA GPUs and fall back to CPU                   2024-01-06 21:40:29 -08:00
gpu_info_rocm.c   Fix windows system memory lookup                                 2024-01-03 08:50:01 -08:00
gpu_info_rocm.h   Adapted rocm support to cgo based llama.cpp                      2023-12-19 09:05:46 -08:00
gpu_test.go       Fix windows system memory lookup                                 2024-01-03 08:50:01 -08:00
types.go          Fix windows system memory lookup                                 2024-01-03 08:50:01 -08:00