Commit graph

2746 commits

Author SHA1 Message Date
jmorganca 63a453554d go mod tidy 2024-05-19 23:03:57 -07:00
Patrick Devine 105186aa17
add OLLAMA_NOHISTORY to turn off history in interactive mode (#4508) 2024-05-18 11:51:57 -07:00
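A minimal sketch of how an OLLAMA_NOHISTORY toggle can gate interactive history. The helper name historyEnabled and the rule that any non-empty value disables history are assumptions for illustration, not the actual cmd/interactive.go wiring from #4508.

```go
// Sketch only: decide whether to record readline history based on
// the OLLAMA_NOHISTORY environment variable.
package main

import (
	"fmt"
	"os"
	"strings"
)

// historyEnabled is a hypothetical helper; here any non-empty value
// of OLLAMA_NOHISTORY turns history off.
func historyEnabled() bool {
	return strings.TrimSpace(os.Getenv("OLLAMA_NOHISTORY")) == ""
}

func main() {
	if historyEnabled() {
		fmt.Println("interactive history: on")
	} else {
		fmt.Println("interactive history: off")
	}
}
```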
Daniel Hiltgen ba04afc9a4
Merge pull request #4483 from dhiltgen/clean_exit
Don't return error on signal exit
2024-05-17 11:41:57 -07:00
Daniel Hiltgen 7e1e0086e7
Merge pull request #4482 from dhiltgen/integration_improvements
Skip max queue test on remote
2024-05-16 16:43:48 -07:00
Daniel Hiltgen 02b31c9dc8 Don't return error on signal exit 2024-05-16 16:25:38 -07:00
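The clean-exit change above is about treating an interrupt as a normal shutdown. A hedged sketch of that pattern, not the actual #4483 diff: catch the signal and exit with status 0 rather than propagating an error up the call stack.

```go
// Illustrative only: translate SIGINT/SIGTERM into a clean exit code.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)

	fmt.Println("waiting for a signal (press Ctrl+C)...")
	<-sig

	// Exit cleanly instead of returning an error to the caller.
	os.Exit(0)
}
```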
Daniel Hiltgen 7f2fbad736 Skip max queue test on remote
This test needs to be able to adjust the queue size down from
our default setting to run reliably, so it is skipped in
remote test execution mode.
2024-05-16 16:24:18 -07:00
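A hypothetical sketch of the skip pattern described above. The OLLAMA_TEST_EXISTING marker and the test name are assumptions for illustration; the real integration harness may signal remote execution differently.

```go
// Sketch: skip a queue-sizing test when running against a remote server
// whose queue settings the test cannot change.
package integration

import (
	"os"
	"testing"
)

func TestMaxQueue(t *testing.T) {
	// Assumed marker for remote/existing-server test mode.
	if os.Getenv("OLLAMA_TEST_EXISTING") != "" {
		t.Skip("running against an existing server; cannot shrink the request queue")
	}
	// ... exercise the server with a reduced queue size here ...
}
```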
Josh 5bece94509
Merge pull request #4463 from ollama/jyan/line-display
changed line display to be calculated with runewidth
2024-05-16 14:15:08 -07:00
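The runewidth change above concerns terminal columns rather than byte or rune counts. A small sketch, assuming github.com/mattn/go-runewidth is the package in question, showing why the distinction matters for wide (CJK) characters:

```go
// Compare byte length with on-screen display width.
package main

import (
	"fmt"

	"github.com/mattn/go-runewidth"
)

func main() {
	for _, s := range []string{"hello", "こんにちは"} {
		// A wide rune occupies two terminal cells, so display width
		// differs from both byte length and rune count.
		fmt.Printf("%q: bytes=%d displayWidth=%d\n", s, len(s), runewidth.StringWidth(s))
	}
}
```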
Josh Yan 3d90156e99 removed comment 2024-05-16 14:12:03 -07:00
Rose Heart 5e46c5c435
Updating software for README (#4467)
* Update README.md

Added chat/moderation bot to list of software.

* Update README.md

Fixed link error.
2024-05-16 13:55:14 -07:00
Jeffrey Morgan 583c1f472c
update llama.cpp submodule to 614d3b9 (#4414) 2024-05-16 13:53:09 -07:00
Josh Yan 26bfc1c443 go fmt'd cmd.go 2024-05-15 17:26:39 -07:00
Josh Yan 799aa9883c go fmt'd cmd.go 2024-05-15 17:24:17 -07:00
Michael Yang 84ed77cbd8
Merge pull request #4436 from ollama/mxyng/done-part
return on part done
2024-05-15 17:16:24 -07:00
Josh Yan c9e584fb90 updated double-width display 2024-05-15 16:45:24 -07:00
Josh Yan 17b1e81ca1 fixed width and word count for double spacing 2024-05-15 16:29:33 -07:00
Daniel Hiltgen 7e9a2da097
Merge pull request #4462 from dhiltgen/opt_out_build
Port cuda/rocm skip build vars to linux
2024-05-15 16:27:47 -07:00
Daniel Hiltgen c48c1d7c46 Port cuda/rocm skip build vars to linux
Windows already implements these; carry them over to Linux.
2024-05-15 15:56:43 -07:00
Patrick Devine d1692fd3e0
fix the cpu estimatedTotal memory + get the expiry time for loading models (#4461) 2024-05-15 15:43:16 -07:00
Daniel Hiltgen 5fa36a0833
Merge pull request #4459 from dhiltgen/sanitize_env_log
Sanitize the env var debug log
2024-05-15 14:58:55 -07:00
Daniel Hiltgen 853ae490e1 Sanitize the env var debug log
Only dump env vars we care about in the logs
2024-05-15 14:42:57 -07:00
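A rough sketch of the allowlist idea described above: log only the variables the server actually consumes instead of dumping the whole environment. The variable names and the use of log/slog here are assumptions for illustration; the real list lives in the envconfig package.

```go
// Sketch: dump a fixed allowlist of OLLAMA_* variables at debug level.
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Enable debug output for this example.
	slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr,
		&slog.HandlerOptions{Level: slog.LevelDebug})))

	// Assumed allowlist; the actual set of interesting variables may differ.
	interesting := []string{"OLLAMA_HOST", "OLLAMA_MODELS", "OLLAMA_NUM_PARALLEL", "OLLAMA_MAX_QUEUE"}

	vals := map[string]string{}
	for _, k := range interesting {
		if v, ok := os.LookupEnv(k); ok {
			vals[k] = v
		}
	}
	slog.Debug("effective configuration", "env", vals)
}
```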
Patrick Devine f2cf97d6f1
fix typo in modelfile generation (#4439) 2024-05-14 15:34:29 -07:00
Patrick Devine c344da4c5a
fix keepalive for non-interactive mode (#4438) 2024-05-14 15:17:04 -07:00
Michael Yang 0e331c7168
Merge pull request #4328 from ollama/mxyng/mem
count memory up to NumGPU if set by user
2024-05-14 13:47:44 -07:00
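A rough sketch of the capping idea in #4328, with made-up function names and sizes: when the user pins the GPU layer count, the memory estimate stops accumulating once that many layers are counted instead of filling all available VRAM.

```go
// Sketch: count how many layers fit, honoring a user-specified cap.
package main

import "fmt"

// layersToOffload is a hypothetical helper; userNumGPU < 0 means "no cap".
func layersToOffload(layerSize, freeVRAM uint64, totalLayers, userNumGPU int) int {
	n := 0
	var used uint64
	for i := 0; i < totalLayers; i++ {
		if userNumGPU >= 0 && n >= userNumGPU {
			break // user-requested layer count reached; stop counting memory
		}
		if used+layerSize > freeVRAM {
			break // out of VRAM
		}
		used += layerSize
		n++
	}
	return n
}

func main() {
	fmt.Println(layersToOffload(512<<20, 8<<30, 33, 10)) // capped by the user: 10
	fmt.Println(layersToOffload(512<<20, 8<<30, 33, -1)) // capped by VRAM: 16
}
```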
Michael Yang ac145f75ca return on part done 2024-05-14 13:04:30 -07:00
Patrick Devine a4b8d1f89a
re-add system context (#4435) 2024-05-14 11:38:20 -07:00
Ryo Machida 798b107f19
Fixed the API endpoint /api/tags when the model list is empty. (#4424)
* Fixed the API endpoint /api/tags to return {models: []} instead of {models: null} when the model list is empty.

* Update server/routes.go

---------

Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2024-05-14 11:18:10 -07:00
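The /api/tags fix above comes down to how encoding/json marshals a nil slice versus an empty one. A minimal illustration with a simplified stand-in for the real response type:

```go
// nil slices marshal to null; empty-but-initialized slices marshal to [].
package main

import (
	"encoding/json"
	"fmt"
)

// ListResponse is a simplified stand-in, not the actual api.ListResponse.
type ListResponse struct {
	Models []string `json:"models"`
}

func main() {
	var uninitialized ListResponse                  // Models is nil
	initialized := ListResponse{Models: []string{}} // Models is empty but non-nil

	a, _ := json.Marshal(uninitialized)
	b, _ := json.Marshal(initialized)
	fmt.Println(string(a)) // {"models":null}
	fmt.Println(string(b)) // {"models":[]}
}
```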
Daniel Hiltgen 6a1b471365
Merge pull request #4430 from dhiltgen/gpu_info
Remove VRAM convergence check for windows
2024-05-14 10:59:06 -07:00
Daniel Hiltgen ec231a7923 Remove VRAM convergence check for windows
The APIs we query are optimistic about free space, and Windows pages
VRAM, so we don't have to wait to see reported usage recover on unload.
2024-05-14 09:53:46 -07:00
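Illustrative sketch of the check being dropped for Windows, with hypothetical names throughout: poll reported free VRAM after an unload, but short-circuit on Windows where, as the commit notes, the reported numbers are optimistic and VRAM is paged.

```go
// Sketch: wait for free VRAM to recover after unload, except on Windows.
package main

import (
	"fmt"
	"runtime"
	"time"
)

// waitForVRAMRecovery polls a free-memory reader until it reports at least
// target bytes or a deadline passes. Hypothetical helper, not the gpu package.
func waitForVRAMRecovery(target uint64, free func() uint64) {
	if runtime.GOOS == "windows" {
		return // reported usage may never converge; don't block the unload
	}
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		if free() >= target {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	calls := 0
	fakeFree := func() uint64 { calls++; return 8 << 30 } // pretend VRAM is free
	waitForVRAMRecovery(7<<30, fakeFree)
	fmt.Println("polled the fake reader", calls, "time(s)")
}
```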
Patrick Devine 7ca71a6b0f
don't abort when an invalid model name is used in /save (#4416) 2024-05-13 18:48:28 -07:00
Josh 7607e6e902
Merge pull request #4379 from WolfTheDeveloper/main
Update `LlamaScript` to point to the new link instead of the legacy link.
2024-05-13 18:08:32 -07:00
Patrick Devine f1548ef62d
update the FAQ to be more clear about windows env variables (#4415) 2024-05-13 18:01:13 -07:00
Patrick Devine 6845988807
Ollama ps command for showing currently loaded models (#4327) 2024-05-13 17:17:36 -07:00
Josh 9eed4a90ce
Merge pull request #4411 from joshyan1/main
removed inconsistent punctuation
2024-05-13 15:30:45 -07:00
Josh Yan f8464785a6 removed inconsistencies 2024-05-13 14:50:52 -07:00
Michael Yang 1d359e737e typo 2024-05-13 14:18:34 -07:00
Michael Yang 50b9056e09 count memory up to NumGPU 2024-05-13 14:13:10 -07:00
Josh Yan 91a090a485 removed inconsistent punctuation 2024-05-13 14:08:22 -07:00
睡觉型学渣 9c76b30d72
Correct typos. (#4387)
* Correct typos.

* Correct typos.
2024-05-12 18:21:11 -07:00
Zander Lewis 93f19910c5
Update LlamaScript to point to new link.
It still used the legacy link.
2024-05-12 11:24:21 -04:00
jmorganca 4ec7445a6f Revert "use post token"
This reverts commit 0fec3525ad.
2024-05-11 22:19:14 -07:00
Michael Yang 0372c51f82
Merge pull request #4369 from ollama/mxyng/post-token
use post token
2024-05-11 19:29:14 -07:00
Michael Yang 0fec3525ad use post token 2024-05-11 19:13:16 -07:00
Jeffrey Morgan 41ba3017fd
Fix OpenAI finish_reason values when empty (#4368) 2024-05-11 15:31:41 -07:00
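The finish_reason fix touches the OpenAI-compatible layer; exactly how #4368 handles it isn't shown here, but a hedged sketch with simplified types illustrates one way to map an empty reason to a JSON null instead of an empty string:

```go
// Sketch: an empty upstream reason becomes a null finish_reason.
package main

import (
	"encoding/json"
	"fmt"
)

// Choice is a simplified stand-in for an OpenAI-style response choice.
type Choice struct {
	FinishReason *string `json:"finish_reason"`
}

func toFinishReason(reason string) *string {
	if reason == "" {
		return nil // serializes as null
	}
	return &reason
}

func main() {
	a, _ := json.Marshal(Choice{FinishReason: toFinishReason("")})
	b, _ := json.Marshal(Choice{FinishReason: toFinishReason("stop")})
	fmt.Println(string(a)) // {"finish_reason":null}
	fmt.Println(string(b)) // {"finish_reason":"stop"}
}
```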
todashuta 8080fbce35
fix ollama create's usage string (#4362) 2024-05-11 14:47:49 -07:00
Michael Yang ec14f6ceda
case sensitive filepaths (#4366) 2024-05-11 14:12:36 -07:00
Daniel Hiltgen c60a086635
Merge pull request #4331 from dhiltgen/fix_unit
Fix envconfig unit test
2024-05-11 09:16:28 -07:00
jmorganca 92ca2cca95 Revert "only forward some env vars"
This reverts commit ce3b212d12.
2024-05-10 22:53:21 -07:00
Patrick Devine 1e1634daca
update go deps (#4324) 2024-05-10 21:39:27 -07:00
Daniel Hiltgen 824ee5446f Fix envconfig unit test 2024-05-10 16:49:48 -07:00
Daniel Hiltgen 879e2caf8c
Merge pull request #4329 from dhiltgen/zero_layers
Fall back to CPU runner with zero layers
2024-05-10 15:23:16 -07:00