ollama/llm (last commit: 2023-11-09 13:16:16 -08:00)

| File         | Last commit message                                                                                        | Date                       |
| ------------ | ---------------------------------------------------------------------------------------------------------- | -------------------------- |
| llama.cpp    | restore building runner with AVX on by default (#900)                                                      | 2023-10-27 12:13:44 -07:00 |
| falcon.go    | starcoder                                                                                                  | 2023-10-02 19:56:51 -07:00 |
| ggml.go      | ggufv3                                                                                                     | 2023-10-23 09:35:49 -07:00 |
| gguf.go      | instead of static number of parameters for each model family, get the real number from the tensors (#1022) | 2023-11-08 17:55:46 -08:00 |
| llama.go     | skip gpu if less than 2GB VRAM are available (#1059)                                                       | 2023-11-09 13:16:16 -08:00 |
| llm.go       | default rope params to 0 for new models (#968)                                                             | 2023-11-02 08:41:30 -07:00 |
| starcoder.go | starcoder                                                                                                  | 2023-10-02 19:56:51 -07:00 |
| utils.go     | partial decode ggml bin for more info                                                                      | 2023-08-10 09:23:10 -07:00 |