diff --git a/README.md b/README.md
index 7474178287d520b25dcf26aeef52999d648acefe..34718c5152e3710f77ceb6e608dc766d9fea519a 100644
--- a/README.md
+++ b/README.md
@@ -34,14 +34,13 @@ It is recommended to use an HPC Desktop job in the Interactive Apps section of <
     1. `conda env create --file environment.yml`
 1. Obtain the rendered UAB RC Documentation pages by running `pull-site.sh`.
 1. Set up `ollama` by running `setup-ollama.sh`.
-1. Start the `ollama` server by running `./ollama serve`.
 
 ### Once-per-job Setup
 
 1. Load the Miniforge module with `module load Miniforge3`.
 1. Start the `ollama` server application with `./ollama serve`.
 
-### To Run
+### To Run the Example
 
 1. Run the Jupyter notebook `main.ipynb`.
     - At the time of writing, the Documentation pages contain enough data that generating the embeddings takes about 7-10 minutes. Please be patient.
@@ -55,7 +54,7 @@ The models are supplied by `ollama`.
 - LLM: <https://ollama.com/library/llama3.1>
 - Embedding: <https://ollama.com/library/bge-m3>
 
-## Using other versions and models
+## Using Other Versions and Models
 
 Newer versions of `ollama` are compressed as `.tar.gz` files on the GitHub releases page (<https://github.com/ollama/ollama/releases>). When modifying the `setup-ollama.sh` script to use one of these versions, you will need to account for this.
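
The extraction step described above can be sketched as follows. The release tag, asset name, and archive layout are assumptions to verify against the releases page; a mock archive stands in for the real download so the snippet runs as-is:

```shell
set -euo pipefail

# Real usage in setup-ollama.sh would download the archive first, e.g.
# (tag and asset name are placeholders -- check the releases page):
#   curl -fsSL -o ollama.tgz \
#     "https://github.com/ollama/ollama/releases/download/<tag>/ollama-linux-amd64.tgz"

# Mock archive so the extraction logic below is runnable without network access:
mkdir -p mock/bin
printf '#!/bin/sh\necho ollama-mock\n' > mock/bin/ollama
chmod +x mock/bin/ollama
tar -czf ollama.tgz -C mock bin

# The step newer releases require: extract the tarball instead of
# treating the download as a bare binary.
mkdir -p ollama-dist
tar -xzf ollama.tgz -C ollama-dist

# The server binary typically lands under bin/ inside the extracted tree:
ollama-dist/bin/ollama
```

With a real release archive, `./ollama serve` in the steps above would then point at the extracted binary (e.g. `ollama-dist/bin/ollama serve`), though the exact path inside the archive may differ by release.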