
llm-ls

Important

This is currently a work in progress, expect things to be broken!

llm-ls is an LSP server leveraging LLMs to make your development experience smoother and more efficient.

The goal of llm-ls is to provide a common platform for IDE extensions to be built upon. llm-ls takes care of the heavy lifting with regard to interacting with LLMs so that extension code can be as lightweight as possible.

Features

Prompt

llm-ls uses the current file as context to generate the prompt. It can use "fill in the middle" (FIM) or not, depending on your needs.

It also tokenizes the prompt to make sure it fits within the model's context window.
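As a rough sketch of the idea (not llm-ls's actual implementation: the marker strings and names are illustrative, and a character budget stands in for real tokenization), FIM prompt assembly with a context-window budget could look like this:

```rust
/// Hypothetical FIM marker strings; real values depend on the model.
struct FimTokens {
    prefix: &'static str,
    suffix: &'static str,
    middle: &'static str,
}

/// Assemble a fill-in-the-middle prompt, trimming the code around the
/// cursor so the result fits a size budget. llm-ls tokenizes the prompt
/// with the model's tokenizer; a character budget is used here to keep
/// the sketch self-contained.
fn build_fim_prompt(prefix: &str, suffix: &str, fim: &FimTokens, max_chars: usize) -> String {
    let budget =
        max_chars.saturating_sub(fim.prefix.len() + fim.suffix.len() + fim.middle.len());
    let prefix_budget = budget / 2;
    let suffix_budget = budget - prefix_budget;
    // Keep the code closest to the cursor: the end of the prefix
    // and the start of the suffix.
    let start = prefix.len().saturating_sub(prefix_budget);
    let trimmed_prefix = &prefix[start..];
    let end = suffix.len().min(suffix_budget);
    let trimmed_suffix = &suffix[..end];
    format!(
        "{}{}{}{}{}",
        fim.prefix, trimmed_prefix, fim.suffix, trimmed_suffix, fim.middle
    )
}

fn main() {
    let fim = FimTokens {
        prefix: "<fim_prefix>",
        suffix: "<fim_suffix>",
        middle: "<fim_middle>",
    };
    let prompt = build_fim_prompt("fn add(a: i32, b: i32) -> i32 {\n    ", "\n}", &fim, 200);
    assert!(prompt.starts_with("<fim_prefix>"));
    assert!(prompt.ends_with("<fim_middle>"));
    println!("{prompt}");
}
```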

Telemetry

Gathers information about requests and completions that can enable retraining.

Note that llm-ls does not export any data anywhere (other than setting a user agent when querying the model API); everything is stored in a log file (~/.cache/llm_ls/llm-ls.log) if you set the log level to info.

Completion

llm-ls parses the AST of the code to determine whether completions should be multi-line, single-line, or empty (no completion).
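The decision can be pictured with a deliberately simplified heuristic (illustrative only: llm-ls inspects the actual syntax tree, and these names and rules are invented for the sketch):

```rust
#[derive(Debug, PartialEq)]
enum CompletionMode {
    MultiLine,  // e.g. cursor in a freshly opened, empty block
    SingleLine, // e.g. cursor mid-statement
    Empty,      // no sensible completion at this position
}

/// Hypothetical mode selection: a real implementation would derive
/// `in_empty_block` and the cursor context from the parsed AST.
fn choose_mode(current_line: &str, in_empty_block: bool) -> CompletionMode {
    let trimmed = current_line.trim();
    if in_empty_block {
        // The cursor sits in an empty body: complete a whole block.
        CompletionMode::MultiLine
    } else if trimmed.is_empty() {
        // Nothing on the line and no enclosing empty block: stay quiet.
        CompletionMode::Empty
    } else {
        // Mid-statement: finish the current line only.
        CompletionMode::SingleLine
    }
}

fn main() {
    assert_eq!(choose_mode("let x = ", false), CompletionMode::SingleLine);
    assert_eq!(choose_mode("", true), CompletionMode::MultiLine);
    assert_eq!(choose_mode("", false), CompletionMode::Empty);
    println!("ok");
}
```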

Multiple backends

llm-ls is compatible with Hugging Face's Inference API, Hugging Face's text-generation-inference, ollama, and OpenAI-compatible APIs, like the Python bindings for the llama.cpp server.
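For illustration, any backend exposing an OpenAI-style completion endpoint that accepts a request body along these lines can serve completions (the field values below are examples, not defaults):

```json
{
  "model": "codellama",
  "prompt": "def add(a, b):",
  "max_tokens": 32,
  "temperature": 0.2
}
```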

Compatible extensions

Roadmap

  • support getting context from multiple files in the workspace
  • add suffix_percent setting that determines the ratio of # of tokens for the prefix vs the suffix in the prompt
  • add context window fill percent or change context_window to max_tokens
  • filter bad suggestions (repetitive, same as below, etc.)
  • OTLP traces?