diff --git a/README.md b/README.md
index 769823eb5c8b9d5ff1d537d987dca539f27065e2..e247f27633deef17a0d1093f6833be8386fe0564 100644
--- a/README.md
+++ b/README.md
@@ -1,36 +1,32 @@
-Project to provision an OpenHPC cluster via Vagrant using the
+Project to provision an [OpenHPC](https://openhpc.community/) + [Open OnDemand](https://openondemand.org/) cluster via Vagrant using the
 CRI_XCBC (XSEDE basic cluster) Ansible provisioning framework.
 
 The Vagrantfile takes inspiration from the [vagrantcluster](https://github.com/cluening/vagrantcluster)
-project but is oriented toward deploying only a master node 
-and using standard OHPC tools to provision the cluster, and 
-therfore favors the CRI_XCBC approach to ansible scripts just 
+project but is oriented toward deploying only a master node
+and using standard OHPC tools to provision the cluster; it
+therefore favors the CRI_XCBC approach of Ansible scripts just
 for the master.
 
 The Vagrantfile is stripped to the core (rather that carry all
-the cruft of a vagrant init).  It leverages work from a 
+the cruft of a vagrant init).  It leverages work from a
 [pilot project](https://gitlab.rc.uab.edu/ravi89/ohpc_vagrant)
 (primarily the development of an updated CentOS 7.5 image)
-but prefers a clean repo slate.  
+but prefers a clean repo slate.
 
 ## Project Setup
 
-After cloning this project you need to initialize the submodule
-from with in the git repo
+Clone this project recursively to get the correct version of the
+CRI_XCBC submodule used to build the OpenHPC (ohpc) and Open OnDemand (ood) nodes:
 ```
-git submodule init
-git submodule update
+git clone --recursive https://gitlab.rc.uab.edu/jpr/ohpc_vagrant.git
 ```
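+
+If you already cloned the repository without `--recursive`, you can still fetch the
+submodule afterwards with standard git commands:
+```
+git submodule update --init --recursive
+```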
 
-Alternatively you can provide the `--recurse-submodules` command 
-during the initial clone.
-
 ## Cluster Setup
 
 After setting up the project above create your single node OpenHPC
 cluster with vagrant:
 ```
-vagrant up
+vagrant up ohpc
 ```
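+
+As an optional quick check (standard Vagrant usage, nothing specific to this project),
+you can confirm the VM state before logging in:
+```
+vagrant status ohpc
+```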
 
 The ansible config will bring the master node to the point where its
@@ -43,12 +39,12 @@ Create node c0 (choose whatever name makes sense, c0 matches the config):
 compute_create c0
 ```
 
-When prompted start node c0:
+When prompted, start compute node c0:
 ```
 compute_start c0
 ```
 
-If you want to stop the node:
+If you want to stop the compute node:
 ```
 compute_stop c0
 ```
@@ -65,7 +61,7 @@ ipxe.iso in compute_create to match your local environment.
 
 ## Cluster Check
 
-After the `vagrant up` completes you can can log into the cluster with `vagrant ssh`.
+After the `vagrant up ohpc` completes you can log into the cluster with `vagrant ssh ohpc`.
 
 To confirm the system is operational run `sinfo` and you should see the following text:
 ```
@@ -82,3 +78,28 @@ srun hostname
 This should return the name `c0`.
 
 With these tests confirmed you have a working OpenHPC cluster running slurm.
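+
+As a further optional check, you can submit a small batch job.  The script below is a
+minimal sketch assuming a generic Slurm setup; it is not shipped with this project:
+```
+cat > hello.sh <<'EOF'
+#!/bin/bash
+#SBATCH --job-name=hello
+#SBATCH --output=hello.out
+hostname
+EOF
+sbatch hello.sh
+```
+Once the job completes, `hello.out` should contain the compute node name, e.g. `c0`.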
+
+## Boot the Open OnDemand node
+
+A primary function of this project is to provide a dev/test cluster for working
+with Open OnDemand.  After the cluster is up, boot the ood node with:
+```
+vagrant up ood
+```
+
+This will provision the node and, near the end of the provisioning, print several
+sudo commands that need to be run on the ohpc node to register the ood node
+with the cluster so that data synchronization and Slurm work.
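+
+The exact commands come from the provisioning output, so they are not reproduced here.
+As a sketch, they can be run from the host with `vagrant ssh`; the quoted command is a
+placeholder for whatever the ood provisioning printed, not a real command:
+```
+# substitute the sudo command(s) printed at the end of the ood provisioning
+vagrant ssh ohpc -c "sudo <command-from-ood-provisioning-output>"
+```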
+
+After the node is provisioned (or booted) you need to work around an issue with
+the NFS mounts by issuing the `mount -a` command on the ood node:
+```
+vagrant ssh ood -c "sudo mount -a"
+```
+
+After this point you can connect to the web UI of the ood node, typically via
+the following URL (the port mapping may differ in your local Vagrant environment):
+
+http://localhost:8080
+
+The default username and password for the web UI are both 'vagrant'.
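+
+To check that the web UI is reachable before opening a browser, a simple probe with
+curl works (assuming the default 8080 port mapping; adjust the port if your Vagrant
+environment remapped it):
+```
+curl -I http://localhost:8080
+```
+Any HTTP response (often a redirect or an authentication prompt) indicates the ood
+web server is up.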