diff --git a/examples/privategpt/README.md b/examples/privategpt/README.md
index f2f147ba..f739df2b 100644
--- a/examples/privategpt/README.md
+++ b/examples/privategpt/README.md
@@ -2,11 +2,11 @@
 
 https://github.com/jmorganca/ollama/assets/3325447/20cf8ec6-ff25-42c6-bdd8-9be594e3ce1b
 
-> Note: this example is a simplified version of [PrivateGPT](https://github.com/imartinez/privateGPT) that works with Llama 2 Uncensored. All credit for PrivateGPT goes to Iván Martínez who is the creator of it.
+> Note: this example is a slightly modified version of PrivateGPT using models such as Llama 2 Uncensored. All credit for PrivateGPT goes to Iván Martínez, its creator; you can find his GitHub repo [here](https://github.com/imartinez/privateGPT).
 
 ### Setup
 
-Optionally set up a virtual environment:
+Set up a virtual environment (optional):
 
 ```
 python3 -m venv .venv
@@ -32,7 +32,7 @@ mkdir source_documents
 curl https://d18rn0p25nwr6d.cloudfront.net/CIK-0001813756/975b3e9b-268e-4798-a9e4-2a9a7c92dc10.pdf -o source_documents/wework.pdf
 ```
 
-### Ingesting data
+### Ingesting files
 
 ```shell
 python ingest.py
@@ -51,7 +51,7 @@ Using embedded DuckDB with persistence: data will be stored in: db
 Ingestion complete! You can now run privateGPT.py to query your documents
 ```
 
-### Ask Questions!
+### Ask questions
 
 ```shell
 python privateGPT.py
@@ -69,7 +69,7 @@ ollama pull llama2:13b
 MODEL=llama2:13b python privateGPT.py
 ```
 
-## Adding your own data
+## Adding more files
 
 Put any and all your files into the `source_documents` directory