Commit 4b2b5671 authored by Augustin Zidek

Documentation updates - older GPUs and JAX installation

PiperOrigin-RevId: 697634146
parent b618d2d5
@@ -13,7 +13,11 @@ we recommend running with at least 64 GB of RAM.
 We provide installation instructions for a machine with an NVIDIA A100 80 GB GPU
 and a clean Ubuntu 22.04 LTS installation, and expect that these instructions
-should aid others with different setups.
+should aid others with different setups. If you are installing locally outside
+of a Docker container, please ensure CUDA, cuDNN, and JAX are correctly
+installed; the
+[JAX installation documentation](https://jax.readthedocs.io/en/latest/installation.html#nvidia-gpu)
+is a useful reference for this case.
 
 The instructions provided below describe how to:
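The hunk above points local (non-Docker) users at the JAX installation documentation for getting CUDA, cuDNN, and JAX installed together. As a quick sanity check after such an install, a minimal sketch like the one below confirms that JAX actually sees the GPU rather than silently falling back to CPU. It assumes a CUDA-enabled JAX wheel (for example `jax[cuda12]`) plus matching CUDA/cuDNN libraries; the wheel name is an assumption, not part of this commit.

```python
# Minimal sanity-check sketch, assuming a CUDA-enabled JAX wheel (e.g. jax[cuda12])
# and matching CUDA/cuDNN are installed locally outside Docker.
import jax

print(jax.default_backend())  # Expect "gpu"; "cpu" means JAX cannot use the GPU.
print(jax.devices())          # Expect one entry per visible NVIDIA device.
```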
 # Known Issues
 
+### Devices other than NVIDIA A100 or H100
+
+There are currently known unresolved numerical issues with using devices other
+than NVIDIA A100 and H100. For now, accuracy has only been validated for A100
+and H100 GPU device types. See
+[this Issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
@@ -87,12 +87,13 @@ AlphaFold 3 can run on inputs of size up to 4,352 tokens on a single NVIDIA A100
 While numerically accurate, this configuration will have lower throughput
 compared to the set up on the NVIDIA A100 (80 GB), due to less available memory.
 
-#### NVIDIA V100 (16 GB)
+#### Devices other than NVIDIA A100 or H100
 
-While you can run AlphaFold 3 on sequences up to 1,280 tokens on a single NVIDIA
-V100 using the flag `--flash_attention_implementation=xla` in
-`run_alphafold.py`, this configuration has not been tested for numerical
-accuracy or throughput efficiency, so please proceed with caution.
+There are currently known unresolved numerical issues with using devices other
+than NVIDIA A100 and H100. For now, accuracy has only been validated for A100
+and H100 GPU device types. See
+[this Issue](https://github.com/google-deepmind/alphafold3/issues/59) for
+tracking.
 
 ## Compilation Buckets
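Both hunks above note that accuracy has only been validated on A100 and H100 GPUs. If it is unclear which device JAX is actually using, a minimal sketch like the following reports the device kind so you can tell whether you are on a validated GPU or one affected by the linked issue. It assumes a working CUDA-enabled JAX install; the A100/H100 substring check is illustrative and not part of the AlphaFold 3 code.

```python
# Minimal sketch: report the GPU kind JAX sees. The A100/H100 substring check
# is illustrative only (an assumption, not AlphaFold 3 code).
import jax

for device in jax.devices():
    kind = device.device_kind  # e.g. "NVIDIA A100-SXM4-80GB"
    if any(name in kind for name in ("A100", "H100")):
        print(f"{kind}: validated device type")
    else:
        print(f"{kind}: not validated; see issue #59 for known numerical issues")
```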