From 60323e08057d36b617f11d3c4958d342a44d0342 Mon Sep 17 00:00:00 2001
From: Shubham <25881429+shoebham@users.noreply.github.com>
Date: Tue, 4 Jun 2024 10:50:48 +0530
Subject: [PATCH] add embed model command and fix question invoke (#4766)

* add embed model command and fix question invoke

* Update docs/tutorials/langchainpy.md

Co-authored-by: Kim Hallberg

* Update docs/tutorials/langchainpy.md

---------

Co-authored-by: Kim Hallberg
Co-authored-by: Jeffrey Morgan
---
 docs/tutorials/langchainpy.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/tutorials/langchainpy.md b/docs/tutorials/langchainpy.md
index 9a1bca0d..06543a07 100644
--- a/docs/tutorials/langchainpy.md
+++ b/docs/tutorials/langchainpy.md
@@ -45,7 +45,7 @@ all_splits = text_splitter.split_documents(data)
 ```
 
 It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. We can use Ollama directly to instantiate an embedding model. We will use ChromaDB in this example for a vector database. `pip install chromadb`
-
+We also need to pull the embedding model: `ollama pull nomic-embed-text`
 ```python
 from langchain.embeddings import OllamaEmbeddings
 from langchain.vectorstores import Chroma
@@ -68,7 +68,8 @@ The next thing is to send the question and the relevant parts of the docs to the
 ```python
 from langchain.chains import RetrievalQA
 qachain=RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())
-qachain.invoke({"query": question})
+res = qachain.invoke({"query": question})
+print(res['result'])
 ```
 
 The answer received from this chain was:
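
For context, here is a minimal end-to-end sketch of how the patched part of the tutorial is intended to run. It assumes an Ollama server on the default `http://localhost:11434`, that `ollama pull nomic-embed-text` and `ollama pull llama3` have been run, and that `chromadb` and `beautifulsoup4` are installed; the URL, the question, and the splitter parameters are illustrative stand-ins rather than part of the patch, and the loader/splitter lines stand in for the earlier, unchanged steps of the tutorial.

```python
# Sketch only: ties the patched lines into the tutorial's earlier steps.
# Model names, URL, question, and splitter settings are illustrative.
from langchain.llms import Ollama
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Load an illustrative source document and split it into chunks small enough to embed.
loader = WebBaseLoader("https://example.com/some-long-article")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
all_splits = text_splitter.split_documents(data)

# Embed the chunks with the pulled nomic-embed-text model and store them in Chroma.
oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="nomic-embed-text")
vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)

# Build the retrieval chain and print the answer, as the patched lines now do.
ollama = Ollama(base_url="http://localhost:11434", model="llama3")
qachain = RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever())

question = "What is this article about?"
res = qachain.invoke({"query": question})
print(res["result"])
```

Printing `res["result"]` matters because `invoke` returns a dictionary rather than echoing the answer, so the bare `qachain.invoke(...)` call shows nothing when the tutorial is run as a script; that is the "fix question invoke" half of this patch.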