From 4b2b567103528428e014148b281430e0f47fcfc3 Mon Sep 17 00:00:00 2001
From: Augustin Zidek <augustinzidek@google.com>
Date: Mon, 18 Nov 2024 16:21:14 +0000
Subject: [PATCH] Documentation updates - older GPUs and JAX installation

PiperOrigin-RevId: 697634146
---
 docs/installation.md | 6 +++++-
 docs/known_issues.md | 8 ++++++++
 docs/performance.md  | 11 ++++++-----
 3 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/docs/installation.md b/docs/installation.md
index 5be190e..9bfc0d4 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -13,7 +13,11 @@ we recommend running with at least 64 GB of RAM.
 
 We provide installation instructions for a machine with an NVIDIA A100 80 GB
 GPU and a clean Ubuntu 22.04 LTS installation, and expect that these instructions
-should aid others with different setups.
+should aid others with different setups. If you are installing locally outside
+of a Docker container, please ensure CUDA, cuDNN, and JAX are correctly
+installed; the
+[JAX installation documentation](https://jax.readthedocs.io/en/latest/installation.html#nvidia-gpu)
+is a useful reference for this case.
 
 The instructions provided below describe how to:
 
diff --git a/docs/known_issues.md b/docs/known_issues.md
index 2f2372a..982a175 100644
--- a/docs/known_issues.md
+++ b/docs/known_issues.md
@@ -1 +1,9 @@
 # Known Issues
+
+### Devices other than NVIDIA A100 or H100
+
+There are currently known unresolved numerical issues when using devices other
+than NVIDIA A100 and H100. For now, accuracy has only been validated for A100
+and H100 GPU device types. See
+[this Issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
diff --git a/docs/performance.md b/docs/performance.md
index 935de53..29a8a97 100644
--- a/docs/performance.md
+++ b/docs/performance.md
@@ -87,12 +87,13 @@ AlphaFold 3 can run on inputs of size up to 4,352 tokens on a single NVIDIA A100
 
 While numerically accurate, this configuration will have lower throughput
 compared to the set up on the NVIDIA A100 (80 GB), due to less available memory.
 
-#### NVIDIA V100 (16 GB)
+#### Devices other than NVIDIA A100 or H100
 
-While you can run AlphaFold 3 on sequences up to 1,280 tokens on a single NVIDIA
-V100 using the flag `--flash_attention_implementation=xla` in
-`run_alphafold.py`, this configuration has not been tested for numerical
-accuracy or throughput efficiency, so please proceed with caution.
+There are currently known unresolved numerical issues when using devices other
+than NVIDIA A100 and H100. For now, accuracy has only been validated for A100
+and H100 GPU device types. See
+[this Issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
 
 ## Compilation Buckets
-- 
GitLab
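
Note (not part of the patch): the installation.md change above asks users running
outside Docker to verify that CUDA, cuDNN, and JAX are correctly installed. A
minimal sanity-check sketch, assuming a CUDA-enabled `jax` wheel (for example
`jax[cuda12]`) has been installed per the linked JAX documentation:

```python
import jax

# List the accelerators JAX can see. With a working CUDA/cuDNN setup this
# includes GPU devices; a CPU-only install lists only CPU devices.
print(jax.devices())

# Report which backend JAX selected: "gpu" when the CUDA-enabled jaxlib was
# found, "cpu" when JAX fell back to CPU-only mode.
print(jax.default_backend())
```

If `jax.default_backend()` reports `cpu` on a GPU machine, the CUDA-enabled
jaxlib was not picked up; reinstalling JAX following its installation
documentation is the usual fix.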