ollama/llm
Daniel Hiltgen e9ce91e9a6 Load dynamic cpu lib on windows
On Linux, we link the CPU library into the Go app and fall back to it
when no GPU match is found. On Windows we do not link in the CPU library,
so that we can better control our dependencies for the CLI. This fixes
the logic so that we correctly fall back to the dynamic CPU library
on Windows.
2024-01-04 08:41:41 -08:00
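The fallback described in the commit message can be sketched roughly as follows. This is an illustrative sketch only, not ollama's actual API: the function and backend names (`pickBackend`, `cpu_dynamic`, `cpu_static`) are hypothetical, and the real implementation lives in the platform-specific files listed below (`ext_server_windows.go`, `shim_ext_server_windows.go`, etc.).

```go
package main

import (
	"fmt"
	"runtime"
)

// pickBackend chooses which accelerator library to use. On Linux the
// CPU backend is linked into the Go binary, so it can be used directly
// as a fallback; Windows builds do not link the CPU backend statically,
// so when no GPU matches they must load a dynamic CPU shim instead.
// All names here are illustrative assumptions, not ollama's real API.
func pickBackend(gpuDetected bool) string {
	if gpuDetected {
		return "gpu"
	}
	if runtime.GOOS == "windows" {
		// No statically linked CPU library on Windows:
		// fall back to the dynamically loaded CPU shim.
		return "cpu_dynamic"
	}
	// Linux and other platforms: use the linked-in CPU library.
	return "cpu_static"
}

func main() {
	fmt.Println(pickBackend(false))
}
```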
File                        Last commit message                                   Date
llama.cpp                   Load dynamic cpu lib on windows                       2024-01-04 08:41:41 -08:00
dynamic_shim.c              Switch windows build to fully dynamic                 2024-01-02 15:36:16 -08:00
dynamic_shim.h              Refactor how we augment llama.cpp                     2024-01-02 15:35:55 -08:00
ext_server_common.go        fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
ext_server_default.go       fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
ext_server_windows.go       Load dynamic cpu lib on windows                       2024-01-04 08:41:41 -08:00
ggml.go                     deprecate ggml                                        2023-12-19 09:05:46 -08:00
gguf.go                     remove per-model types                                2023-12-11 09:40:21 -08:00
llama.go                    fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
llm.go                      Load dynamic cpu lib on windows                       2024-01-04 08:41:41 -08:00
shim_darwin.go              Switch windows build to fully dynamic                 2024-01-02 15:36:16 -08:00
shim_ext_server.go          Fix CPU only builds                                   2024-01-03 16:08:34 -08:00
shim_ext_server_linux.go    Switch windows build to fully dynamic                 2024-01-02 15:36:16 -08:00
shim_ext_server_windows.go  Load dynamic cpu lib on windows                       2024-01-04 08:41:41 -08:00
utils.go                    partial decode ggml bin for more info                 2023-08-10 09:23:10 -07:00