diff --git a/docs/import.md b/docs/import.md
index db0a53cb..6c924892 100644
--- a/docs/import.md
+++ b/docs/import.md
@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
 
 - Llama & Mistral
 - Falcon & RW
-- GPT-NeoX
 - BigCode
 
 To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
@@ -184,9 +183,6 @@ python convert.py
 # FalconForCausalLM
 python convert-falcon-hf-to-gguf.py
 
-# GPTNeoXForCausalLM
-python convert-gptneox-hf-to-gguf.py
-
 # GPTBigCodeForCausalLM
 python convert-starcoder-hf-to-gguf.py
 ```