Gered
8cde5ce43f
upgrade ollama backend support to use the chat completions api
2024-11-10 18:52:20 -05:00
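For context, a chat-completions-style request sends a list of messages instead of a single prompt string. A minimal sketch in Rust, assuming reqwest (with its json feature), serde and anyhow; the struct names, the model name and the choice of ollama's native /api/chat endpoint are illustrative assumptions, not the exact code in llm-ls:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct ChatMessage {
    role: String,
    content: String,
}

#[derive(Serialize)]
struct ChatRequest {
    model: String,
    messages: Vec<ChatMessage>,
    stream: bool,
}

#[derive(Deserialize)]
struct ChatResponse {
    message: AssistantMessage,
}

#[derive(Deserialize)]
struct AssistantMessage {
    content: String,
}

// Hypothetical helper: send one user message and return the assistant's reply.
async fn ollama_chat(client: &reqwest::Client, prompt: &str) -> anyhow::Result<String> {
    let body = ChatRequest {
        model: "codellama".to_string(), // placeholder model name
        messages: vec![ChatMessage {
            role: "user".to_string(),
            content: prompt.to_string(),
        }],
        stream: false,
    };
    let res: ChatResponse = client
        .post("http://localhost:11434/api/chat") // ollama's native chat endpoint
        .json(&body)
        .send()
        .await?
        .json()
        .await?;
    Ok(res.message.content)
}
```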
Gered
29cec6445c
add missing string -> LanguageId conversions
...
forgot to add "php" before.
also add extra variations for existing Bash and Yaml language types
2024-11-10 12:52:12 -05:00
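The conversion being described is essentially a string-to-enum mapping with aliases. A rough sketch; LanguageId and its variants here are illustrative stand-ins, not the actual llm-ls enum:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum LanguageId {
    Bash,
    Php,
    Yaml,
    Unknown,
}

impl From<&str> for LanguageId {
    fn from(value: &str) -> Self {
        // Editors report language ids with several spellings, so map the
        // common aliases onto one variant (e.g. "sh" and "shellscript" -> Bash).
        match value {
            "bash" | "sh" | "shellscript" => Self::Bash,
            "php" => Self::Php,
            "yaml" | "yml" => Self::Yaml,
            _ => Self::Unknown,
        }
    }
}
```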
Gered
880a729f48
add cmake, dockerfile, php, toml and yaml tree-sitter support
2024-11-10 12:50:50 -05:00
Gered
25b4794d3e
update tree-sitter dependencies
...
removed tree-sitter-kotlin and tree-sitter-objc as there don't seem
to be versions of these compatible with the latest tree-sitter
2024-11-09 17:47:22 -05:00
Gered
9748c0fd60
pass api_token option to ollama backend
2024-11-03 17:56:39 -05:00
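Passing an api_token to a backend usually boils down to attaching a bearer token to outgoing requests. A hedged sketch with reqwest and anyhow; the header format and the way the option is threaded through are assumptions, not necessarily how llm-ls wires it:

```rust
use reqwest::header::{HeaderMap, HeaderValue, AUTHORIZATION};

// Illustrative helper: build default headers for the backend client, adding
// an Authorization header only when an API token was configured.
fn build_headers(api_token: Option<&str>) -> anyhow::Result<HeaderMap> {
    let mut headers = HeaderMap::new();
    if let Some(token) = api_token {
        headers.insert(AUTHORIZATION, HeaderValue::from_str(&format!("Bearer {token}"))?);
    }
    Ok(headers)
}
```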
Gered
5abb3c86b4
add rust-toolchain.toml to set rust 1.79.0 to fix build issues
...
there are some dependency updates necessary to use 1.80+, but i'm not
really interested in fixing these problems myself, so i'll just sidestep
that problem with this
2024-11-03 17:14:10 -05:00
Stefan Teofanovic
59febfea52
fix issue 45 - use SystemTime instead of Instant for unauthenticated warning ( #100 )
...
Co-authored-by: Teofanovic Stefan <stefan.teofanovic@hes-so.ch>
2024-07-08 10:53:51 +02:00
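The gist of the fix is to throttle the warning with wall-clock time rather than Instant. A minimal sketch under that assumption; the interval, field and function names are hypothetical:

```rust
use std::time::{Duration, SystemTime};

const WARNING_INTERVAL: Duration = Duration::from_secs(3600); // hypothetical interval

// Returns true when the unauthenticated warning should be emitted again and
// records the new timestamp. A clock that went backwards (duration_since error)
// is treated as "too soon", so the warning is simply skipped.
fn should_warn(last_warning: &mut Option<SystemTime>) -> bool {
    let now = SystemTime::now();
    match *last_warning {
        Some(prev) if now.duration_since(prev).unwrap_or_default() < WARNING_INTERVAL => false,
        _ => {
            *last_warning = Some(now);
            true
        }
    }
}
```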
Luc Georges
479401f3a5
fix(ci): update actions to use nodejs 20 ( #96 )
2024-05-24 17:33:05 +02:00
Luc Georges
98a12630e7
feat: add backend url route completion ( #95 )
2024-05-24 16:30:21 +02:00
Luc Georges
0e95bb3589
feat: add llama.cpp backend ( #94 )
...
* feat: add `llama.cpp` backend
* fix(ci): install stable toolchain instead of nightly
* fix(ci): use different model
---------
Co-authored-by: flopes <FredericoPerimLopes@users.noreply.github.com>
2024-05-24 10:17:48 +02:00
Jeremy Elalouf
078d4c7af2
Feature/multiple encodings handled ( #88 )
...
* feat: support added for multiple encodings
Previously, the only supported encoding was UTF-8.
In cases where the editor sends updates encoded in UTF-16,
the mirror of the user's workspace goes out of sync, leading to a server crash.
Furthermore, UTF-16 is the default and mandatory encoding for the protocol,
and servers must support it.
Therefore, it is imperative to ensure its support.
Now, to avoid any unnecessary conversion, the server negotiates the encoding
with the client during the initialization phase.
This allows the server to choose its preferred encoding in cases where it is available.
See: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocuments
* style: format added code using rustfmt
* feat: cargo clippy warnings fixed
* test: add some unit tests for the new `apply_content_change` function
* feat: review comments fixed
Use of the `document::PositionEncodingKind` enum everywhere it's
possible and TryFrom implemented on it.
Also, LspError is not used anymore.
* feat: review comments fixed, mostly style.
2024-03-07 13:50:52 +01:00
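As an illustration of why the negotiated encoding matters: an LSP client that talks UTF-16 sends column offsets in UTF-16 code units, which the server has to translate before touching its mirror of the document. The sketch below does that translation with ropey (which llm-ls uses); the function itself is an assumption, not the actual apply_content_change code:

```rust
use ropey::Rope;

// Convert an LSP (line, UTF-16 code unit) position into a char index in `rope`.
// Illustrative only: real code also clamps out-of-range positions and honours
// whichever PositionEncodingKind was negotiated during initialization.
fn utf16_position_to_char(rope: &Rope, line: usize, utf16_col: u32) -> usize {
    let line_start = rope.line_to_char(line);
    let mut utf16_units = 0u32;
    for (i, ch) in rope.line(line).chars().enumerate() {
        if utf16_units >= utf16_col {
            return line_start + i;
        }
        utf16_units += ch.len_utf16() as u32;
    }
    line_start + rope.line(line).len_chars()
}
```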
Luc Georges
386ed53183
test: fix invalid deserialization ( #85 )
...
* test: fix invalid deserialization
* feat: install node or python only when needed
* feat: when action is rerun, run testbed with logs in debug
* feat(ci): update actions to node 20
* feat: log stdout & stderr from setup & build cmd as debug
* fix: bump helix revision
* fix: use helix fork for the time being
* fix(ci): install missing setuptools
* fix(ci): revert back to python 3.10
* fix(ci): disable lance
2024-02-19 17:25:42 +01:00
Luc Georges
0d035901f0
fix: nameless file crash ( #84 )
2024-02-19 12:08:32 +01:00
Luc Georges
50b62bd107
fix: AcceptCompletionParams -> RejectCompletionParams ( #83 )
2024-02-19 11:15:27 +01:00
Luc Georges
0b75e5dd60
refactor: cleanup unused code ( #82 )
2024-02-19 10:27:01 +01:00
Luc Georges
fe1f6aab47
fix: always set return_full_text to false for better UX ( #78 )
2024-02-13 11:02:50 +01:00
Luc Georges
4437c0c8a6
fix: deserialize url null value w/ default if backend: huggingface ( #75 )
2024-02-12 20:07:59 +01:00
Luc Georges
86043ce3af
fix: change visibility of internal functions
2024-02-12 15:58:56 +01:00
Luc Georges
8926969265
docs(README): replace llama.cpp link with python bindings
2024-02-12 11:07:31 +01:00
Luc Georges
4891468c1a
feat: update backend & model parameter ( #74 )
...
* feat: update backend & model parameter
* fix: add `stream: false` in request body for ollama & openai
2024-02-09 18:42:41 +01:00
Luc Georges
92fc885503
fix: helix editor build crash ( #73 )
2024-02-08 22:43:56 +01:00
Luc Georges
54b25a8731
feat: add socket connection ( #72 )
2024-02-07 15:06:58 +01:00
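llm-ls is built on tower-lsp, and serving such a service over a TCP socket generally follows the pattern below. Treat this as a sketch under that assumption rather than the code added in #72; the Backend stub, the port and the lack of a stdio fallback are all placeholders:

```rust
use tower_lsp::jsonrpc::Result;
use tower_lsp::lsp_types::{InitializeParams, InitializeResult};
use tower_lsp::{Client, LanguageServer, LspService, Server};

struct Backend {
    client: Client,
}

#[tower_lsp::async_trait]
impl LanguageServer for Backend {
    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        Ok(InitializeResult::default())
    }

    async fn shutdown(&self) -> Result<()> {
        Ok(())
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Listen on a local TCP port instead of stdio; the editor connects to it.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:9257").await?; // placeholder port
    let (stream, _) = listener.accept().await?;
    let (read, write) = tokio::io::split(stream);

    let (service, socket) = LspService::new(|client| Backend { client });
    Server::new(read, write, socket).serve(service).await;
    Ok(())
}
```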
Luc Georges
455b085c96
refactor: error handling ( #71 )
2024-02-07 12:34:17 +01:00
Luc Georges
a9831d5720
refactor: adaptor -> backend ( #70 )
2024-02-06 21:26:53 +01:00
Markus Hennerbichler
1499fd6cbf
Only warn of rate-limits when using HF endpoint ( #58 )
...
* Only warn of rate-limits when using HF endpoint
Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
2024-02-05 21:05:52 +01:00
Markus Hennerbichler
c9a44e591c
Fix handling of end-of-file ( #60 )
...
* Fix underflow on empty line
* Update crates/llm-ls/src/main.rs
Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
* Update crates/llm-ls/src/main.rs
Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
---------
Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
2024-02-05 18:37:24 +01:00
Markus Hennerbichler
16606e5371
Fix off-by-1 error when removing at end ( #61 )
2024-01-28 23:25:14 +01:00
Markus Hennerbichler
c2fbac20ee
Ignore documents with "output" scheme ( #59 )
2024-01-28 23:21:19 +01:00
Markus Hennerbichler
ec2072a621
Fix off-by-1 in prompt creation ( #64 )
2024-01-28 23:19:15 +01:00
Markus Hennerbichler
a5f2e87315
Account for FIM tokens in prompt ( #62 )
2024-01-28 23:16:01 +01:00
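Accounting for FIM tokens means the fill-in-the-middle special tokens themselves take up part of the model's context window, so the prompt budget has to shrink accordingly. A hedged sketch of the idea; the field names, the StarCoder-style token layout and count_tokens are illustrative assumptions:

```rust
struct FimTokens {
    prefix: String, // e.g. "<fim_prefix>"
    suffix: String, // e.g. "<fim_suffix>"
    middle: String, // e.g. "<fim_middle>"
}

// How much of the context window is left for actual code once the three FIM
// special tokens are counted. `count_tokens` stands in for whichever tokenizer
// the server has loaded.
fn prompt_budget(context_window: usize, fim: &FimTokens, count_tokens: impl Fn(&str) -> usize) -> usize {
    let fim_cost = count_tokens(&fim.prefix) + count_tokens(&fim.suffix) + count_tokens(&fim.middle);
    context_window.saturating_sub(fim_cost)
}

// StarCoder-style layout: prefix token, code before the cursor, suffix token,
// code after the cursor, then the middle token where generation starts.
fn build_fim_prompt(fim: &FimTokens, before_cursor: &str, after_cursor: &str) -> String {
    format!(
        "{}{}{}{}{}",
        fim.prefix, before_cursor, fim.suffix, after_cursor, fim.middle
    )
}
```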
Markus Hennerbichler
f40e8cc6ea
Convert generation parameters to snake_case for TGI adapter ( #65 )
2024-01-28 23:04:38 +01:00
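With serde this conversion is mostly a matter of attributes: the editor-facing API is camelCase while TGI expects snake_case field names in its request body. An illustrative sketch, not the actual llm-ls structs:

```rust
use serde::{Deserialize, Serialize};

// Parameters arrive from the editor in camelCase (e.g. "maxNewTokens")...
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct RequestParams {
    max_new_tokens: u32,
    temperature: f32,
    do_sample: bool,
}

// ...but are serialized to TGI as-is, i.e. snake_case, since the Rust field
// names are already snake_case and no rename attribute is applied.
#[derive(Serialize)]
struct TgiParameters {
    max_new_tokens: u32,
    temperature: f32,
    do_sample: bool,
}

impl From<&RequestParams> for TgiParameters {
    fn from(p: &RequestParams) -> Self {
        Self {
            max_new_tokens: p.max_new_tokens,
            temperature: p.temperature,
            do_sample: p.do_sample,
        }
    }
}
```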
Noah Baldwin
585ea3aae8
feat: add adaptors for various backends ( #40 )
...
* ollama
* tgi
* api-inference
* OpenAI based APIs
2024-01-02 18:56:01 +01:00
Diego ROJAS
2a433cdf75
Disable ropey unicode_lines feature ( #50 )
...
With the current configuration, Ropey recognises more EOL sequences than the Language Server Protocol. This mismatch can lead to errors when trying to maintain a mirror of the user's documents, as llm-ls might end up with more lines than the client.
See: https://docs.rs/ropey/1.6.0/ropey/index.html#a-note-about-line-breaks
See: https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocuments
2023-12-15 16:11:05 +01:00
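The mismatch is easy to see: with ropey's default unicode_lines feature, Unicode separators such as U+2028 count as line breaks, while the LSP spec only recognises \n, \r\n and \r. A small sketch of the difference, assuming the feature is enabled:

```rust
use ropey::Rope;

fn main() {
    // U+2028 (LINE SEPARATOR) is not an EOL sequence for LSP clients, but with
    // ropey's default `unicode_lines` feature it is treated as a line break, so
    // the server's line count drifts from the client's. Building ropey with
    // default features disabled avoids the extra breaks.
    let rope = Rope::from_str("first\u{2028}second\n");
    println!("ropey line count: {}", rope.len_lines());
}
```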
Johan von Forstner
b1d0eb4ffe
add support for Kotlin language ( #52 )
2023-12-15 15:01:40 +01:00
Luc Georges
6c4e0e4176
feat: parallelise at hole level ( #44 )
...
* feat: parallelise at hole level
* fix(ci): move strategy to testbed job
* feat: output json results file
* fix(ci): install jq
* fix(ci): add missing `runs-on`
* fix(ci): add dependency to testbed job
* fix(ci): invalid artifact key name
* fix(ci): add missing i in fastapi key
* feat(ci): make CI run different # of threads per repo
* fix(ci): results.json not in markdown
* feat: round output values
* fix: avoid creating zombie processes
* fix: check on word instead of line
* feat: recreate holes for long CI
2023-11-17 18:05:45 +01:00
Luc Georges
3ad64a32df
feat!: get completions camel case ( #48 )
...
* feat!: make API camelCase
* fix(testbed): update llm-ls API
* feat: sync docs with incremental change
2023-11-15 13:36:27 +01:00
Luc Georges
59185abfd9
feat!: make API camelCase ( #46 )
...
API had inconsistencies in case
2023-11-09 12:00:39 +01:00
Luc Georges
f58085b812
fix(ci): outdated lance revision ( #43 )
2023-11-07 13:24:22 +01:00
Luc Georges
c7affd0da9
feat: testbed ( #39 )
2023-11-06 21:26:37 +01:00
Luc Georges
4aacd7087b
feat: improve logging ( #34 )
...
* feat: improve logging
* feat: bump to `0.4.0`
* docs: add features section in README
2023-10-18 11:12:37 +02:00
Luc Georges
fdd55dd5c1
fix: new dependencies break CI ( #31 )
...
* fix: install `arm-linux-gnueabihf-g++` as suggested by err
* fix: restrict visibility of modules
* fix: add `g++-aarch64-linux-gnu`
2023-10-11 21:12:18 +02:00
Luc Georges
cdbf76fd43
feat: improve suggestions based on AST ( #30 )
...
* feat: improve suggestions based on AST
* feat: bump version to `0.3.0`
2023-10-11 19:33:57 +02:00
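"Based on AST" here generally means parsing the buffer with tree-sitter and using the node around the cursor to select better context for the prompt. The sketch below shows that pattern with the tree-sitter crate; the function and the way llm-ls actually picks context are assumptions:

```rust
use tree_sitter::{Language, Parser};

// Illustrative: return the text of the smallest node that spans the cursor.
// Real context selection might instead walk up to an enclosing function or
// truncate the result to fit the model's context window.
fn enclosing_context(language: Language, source: &str, cursor_byte: usize) -> Option<String> {
    let mut parser = Parser::new();
    parser.set_language(language).ok()?; // tree-sitter >= 0.22 takes `&language` here
    let tree = parser.parse(source, None)?;
    let node = tree
        .root_node()
        .descendant_for_byte_range(cursor_byte, cursor_byte)?;
    Some(source[node.byte_range()].to_string())
}
```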
Luc Georges
b6d6c6cccd
feat: send warning to client when unauthenticated ( #27 )
2023-10-10 14:43:12 +02:00
Luc Georges
f90bb13fbd
docs: add llm-intellij link ( #26 )
2023-10-04 16:51:00 +02:00
Luc Georges
bad31bd017
docs: mark llm-vscode as compatible ( #23 )
2023-09-25 15:41:47 +02:00
Luc Georges
fbaf98203f
fix: don't use tokenizer on config error ( #22 )
2023-09-25 15:10:29 +02:00
Luc Georges
787f2a1a26
feat: improve tokenizer config ( #21 )
...
* feat: improve tokenizer config
* fix: add untagged decorator to `TokenizerConfig`
* feat: bump version to `0.2.0`
2023-09-21 17:57:19 +02:00
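The "untagged decorator" is serde's untagged enum representation, which lets a single tokenizer field in the config accept several shapes without an explicit tag. An illustrative sketch; the variants are examples, not necessarily the ones llm-ls supports:

```rust
use serde::Deserialize;

// serde tries each variant in order, so {"path": ...}, {"repository": ...} or
// {"url": ...} all deserialize into TokenizerConfig without a tag field.
#[derive(Deserialize)]
#[serde(untagged)]
enum TokenizerConfig {
    Local { path: std::path::PathBuf },
    HuggingFace { repository: String },
    Download { url: String },
}
```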
Luc Georges
eeb443feb3
feat: add user agent ( #20 )
...
* feat: add user agent
* feat: add mock_server to repo
* feat: bump to `0.1.1`
2023-09-21 14:32:21 +02:00
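Setting a user agent with reqwest is a one-liner on the client builder; a minimal sketch (the exact string llm-ls sends is not shown here):

```rust
// Identify the client on every outgoing request, e.g. "llm-ls/0.1.1".
fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
        .build()
}
```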
Luc Georges
7f9c7855d5
feat: add tokens to clear ( #19 )
2023-09-20 19:24:38 +02:00
Luc Georges
5378a67ce8
fix: advertise correct version to client ( #16 )
2023-09-15 18:24:14 +02:00