diff --git a/docs/installation.md b/docs/installation.md
index 5be190e2979f6bf034a006f18beca92361c4bbfa..9bfc0d420b4b66d097aba719f432a60007a747f9 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -13,7 +13,11 @@ we recommend running with at least 64 GB of RAM.
 
 We provide installation instructions for a machine with an NVIDIA A100 80 GB GPU
 and a clean Ubuntu 22.04 LTS installation, and expect that these instructions
-should aid others with different setups.
+should aid others with different setups. If you are installing locally outside
+of a Docker container, please ensure CUDA, cuDNN, and JAX are correctly
+installed; the
+[JAX installation documentation](https://jax.readthedocs.io/en/latest/installation.html#nvidia-gpu)
+is a useful reference for this case.
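+
+As a quick sanity check (a suggestion, not part of the official instructions),
+you can verify that JAX detects the GPU after installation:
+
+```bash
+python -c 'import jax; print(jax.devices())'
+```
+
+On a working CUDA setup this lists a `CudaDevice`; if only a `CpuDevice`
+appears, the CUDA-enabled JAX wheel is likely not installed correctly.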
 
 The instructions provided below describe how to:
 
diff --git a/docs/known_issues.md b/docs/known_issues.md
index 2f2372aaa0cf2d1f444475890d556f87fce01d6b..982a1754f2b7bbfe3706a0de2f41f3b84eb6ff63 100644
--- a/docs/known_issues.md
+++ b/docs/known_issues.md
@@ -1 +1,9 @@
 # Known Issues
+
+### Devices other than NVIDIA A100 or H100
+
+There are currently known, unresolved numerical issues when using devices
+other than the NVIDIA A100 and H100. For now, accuracy has only been validated
+for the A100 and H100 GPU device types. See
+[this issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
diff --git a/docs/performance.md b/docs/performance.md
index 935de538554ef8c70450f7567aae7b3bfbb3346a..29a8a971ee6be803ed249c15d09ad73da8a61c11 100644
--- a/docs/performance.md
+++ b/docs/performance.md
@@ -87,12 +87,13 @@ AlphaFold 3 can run on inputs of size up to 4,352 tokens on a single NVIDIA A100
 While numerically accurate, this configuration will have lower throughput
 compared to the setup on the NVIDIA A100 (80 GB), due to less available memory.
 
-#### NVIDIA V100 (16 GB)
+#### Devices other than NVIDIA A100 or H100
 
-While you can run AlphaFold 3 on sequences up to 1,280 tokens on a single NVIDIA
-V100 using the flag `--flash_attention_implementation=xla` in
-`run_alphafold.py`, this configuration has not been tested for numerical
-accuracy or throughput efficiency, so please proceed with caution.
+There are currently known, unresolved numerical issues when using devices
+other than the NVIDIA A100 and H100. For now, accuracy has only been validated
+for the A100 and H100 GPU device types. See
+[this issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
 
 ## Compilation Buckets