gered/ollama
llm/ggml_test.go @ afa8d6e9d5 · 2 lines · 12 B · Go
llm: speed up gguf decoding by a lot (#5246)

Previously, two costly behaviors made loading GGUF files and their metadata and tensor information very slow:

* Too many allocations when decoding strings
* Hitting disk for every read of every key and value, causing an excessive number of syscalls and disk I/O

The show API is now down to 33 ms from 800 ms+ for llama3 on a MacBook Pro M3. This commit also allows skipping the collection of large arrays of values when decoding GGUFs (if desired); when such keys are encountered, their values are null and are encoded as such in JSON. It also fixes a broken test that was not encoding valid GGUF.
2024-06-25 00:47:52 -04:00
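The two fixes named in the commit message can be sketched in a few lines of Go: wrap the file in a `bufio.Reader` so each small length-prefixed read is served from memory instead of a syscall, and reuse one scratch byte slice across string reads to cut per-string allocations. This is a minimal illustration of the technique, not ollama's actual decoder; the helper name `readGGUFString` is hypothetical.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readGGUFString decodes one length-prefixed string (uint64 little-endian
// length, then the bytes), the layout GGUF uses for keys and string values.
// The bufio.Reader batches many small reads into few large ones, so most
// calls never reach the underlying reader; buf is reused across calls to
// avoid allocating a fresh slice per string.
// (Hypothetical helper for illustration; not ollama's implementation.)
func readGGUFString(r *bufio.Reader, buf *[]byte) (string, error) {
	var n uint64
	if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
		return "", err
	}
	if uint64(cap(*buf)) < n {
		*buf = make([]byte, n)
	}
	b := (*buf)[:n]
	if _, err := io.ReadFull(r, b); err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Build a tiny in-memory stream of two length-prefixed strings.
	var raw bytes.Buffer
	for _, s := range []string{"general.architecture", "llama"} {
		binary.Write(&raw, binary.LittleEndian, uint64(len(s)))
		raw.WriteString(s)
	}

	r := bufio.NewReader(&raw)
	var scratch []byte // reused across reads
	for i := 0; i < 2; i++ {
		s, err := readGGUFString(r, &scratch)
		if err != nil {
			panic(err)
		}
		fmt.Println(s)
	}
}
```

With a real file, the same loop would issue one buffered read per 4 KiB (the default `bufio` buffer) rather than one syscall per key and value, which is where the bulk of the 800 ms+ went.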
package llm