ollama/llm
Latest commit: 16f4603b67 "Improve maintainability of Radeon card list" (Daniel Hiltgen, 2024-01-03 15:16:56 -08:00)
This moves the list of AMD GPUs to an easier-to-maintain list, which should simplify updates over time.
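As a rough, hypothetical sketch of the idea (not the actual change in 16f4603b67), keeping the supported Radeon cards in a single Go slice makes adding a card a one-line edit. The GFX architecture names and the isSupportedRadeon helper below are assumptions for illustration, not ollama's real code:

    // Hypothetical sketch: the GFX names and helper are illustrative only,
    // not the list actually maintained by ollama's build.
    package main

    import (
        "fmt"
        "strings"
    )

    // supportedRadeonGFX collects the AMD GFX architectures this sketch
    // treats as supported; adding a card is a single edit to this slice.
    var supportedRadeonGFX = []string{
        "gfx900", "gfx906", "gfx908", "gfx90a",
        "gfx1030", "gfx1100",
    }

    // isSupportedRadeon reports whether a detected GFX version is on the list.
    func isSupportedRadeon(gfx string) bool {
        gfx = strings.ToLower(strings.TrimSpace(gfx))
        for _, v := range supportedRadeonGFX {
            if v == gfx {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(isSupportedRadeon("gfx1030")) // true
        fmt.Println(isSupportedRadeon("gfx803"))  // false
    }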
File                        Last commit                                               Last commit date
llama.cpp                   Improve maintainability of Radeon card list               2024-01-03 15:16:56 -08:00
dynamic_shim.c              Switch windows build to fully dynamic                     2024-01-02 15:36:16 -08:00
dynamic_shim.h              Refactor how we augment llama.cpp                         2024-01-02 15:35:55 -08:00
ext_server_common.go        fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
ext_server_default.go       fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
ext_server_windows.go       Switch windows build to fully dynamic                     2024-01-02 15:36:16 -08:00
ggml.go                     deprecate ggml                                            2023-12-19 09:05:46 -08:00
gguf.go                     remove per-model types                                    2023-12-11 09:40:21 -08:00
llama.go                    fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
llm.go                      Revamp the dynamic library shim                           2023-12-20 14:45:57 -08:00
shim_darwin.go              Switch windows build to fully dynamic                     2024-01-02 15:36:16 -08:00
shim_ext_server.go          fix: relay request opts to loaded llm prediction (#1761)  2024-01-03 12:01:42 -05:00
shim_ext_server_linux.go    Switch windows build to fully dynamic                     2024-01-02 15:36:16 -08:00
shim_ext_server_windows.go  Switch windows build to fully dynamic                     2024-01-02 15:36:16 -08:00
utils.go                    partial decode ggml bin for more info                     2023-08-10 09:23:10 -07:00