From 82bf0b0daa031babd912e7721207af424f041a61 Mon Sep 17 00:00:00 2001
From: William Warriner <wwarr@uab.edu>
Date: Fri, 11 Oct 2024 14:11:36 -0500
Subject: [PATCH] fix readme

---
 README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 7474178..34718c5 100644
--- a/README.md
+++ b/README.md
@@ -34,14 +34,13 @@ It is recommended to use an HPC Desktop job in the Interactive Apps section of <
 1. `conda env create --file environment.yml`
 1. Obtain the rendered UAB RC Documentation pages by running `pull-site.sh`.
 1. Setup `ollama` by running `setup-ollama.sh`.
-1. Start the `ollama` server by running `./ollama serve`.
 
 ### Once-per-job Setup
 
 1. Load the Miniforge module with `module load Miniforge3`.
 1. Start the `ollama` server application with `./ollama serve`.
 
-### To Run
+### To Run the Example
 
 1. Run the Jupyter notebook `main.ipynb`.
    - At time of writing, the Documentation pages are enough data that it takes about 7-10 minutes to generate the embeddings. Please be patient.
@@ -55,7 +54,7 @@ The models are supplied by `ollama`.
 - LLM: <https://ollama.com/library/llama3.1>
 - Embedding: <https://ollama.com/library/bge-m3>
 
-## Using other versions and models
+## Using Other Versions and Models
 
 Newer versions of `ollama` are compressed as `.tar.gz` files on the GitHub releases page (<https://github.com/ollama/ollama/releases>).
 When modifying the `setup-ollama.sh` script to use these models, you will need to take this into account.
-- 
GitLab
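
A note on the last hunk: the final README paragraph says newer `ollama` releases are distributed as `.tar.gz` archives, so `setup-ollama.sh` must download and unpack an archive rather than fetch a bare binary. The shell sketch below shows one way that adjustment might look; the release tag, asset name, and archive layout are illustrative assumptions, not values taken from the actual script or release page.

#!/usr/bin/env bash
# Sketch only: fetch a newer ollama release that ships as a .tar.gz archive
# and unpack it next to the project files. Tag and asset name are assumed.
set -euo pipefail

OLLAMA_VERSION="v0.3.12"            # assumed tag; pick one from the releases page
ASSET="ollama-linux-amd64.tar.gz"   # assumed asset name; check the chosen release
URL="https://github.com/ollama/ollama/releases/download/${OLLAMA_VERSION}/${ASSET}"

curl -L -o "${ASSET}" "${URL}"   # download the compressed release
tar -xzf "${ASSET}"              # unpack; adjust the paths below to wherever the archive puts the binary
chmod +x bin/ollama              # assumed extracted location of the binary
ln -sf bin/ollama ollama         # symlink so the README's "./ollama serve" steps still work

The trailing symlink is only there so the existing `./ollama serve` instructions in the README keep working without further edits.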