This page describes the provisioning limitations for GPU instances, which are primarily determined by physical hardware constraints.
The limits below apply per GPU model. The per-GPU allocations of vCPUs, RAM, and storage are fixed, so an instance's total vCPUs, RAM, and storage scale proportionally with the number of GPUs it contains.
| GPU | Max GPUs per Instance | vCPUs per GPU | RAM per GPU | Storage per GPU | Public Network Bandwidth |
|---|---|---|---|---|---|
| H100 80GB SXM5 | 8 | 28 | 192GB | 4TB+ | 100Gbps |
| H100 80GB NVLINK | 8 | 28 | 192GB | 4TB+ | 100Gbps |
| H100 80GB PCIE | 8 | 26 | 200GB | 1TB+ | 100Gbps |
| A100 80GB NVLINK | 8 | 28 | 192GB | 4TB+ | 100Gbps |
| A100 80GB PCIE | 8 | 24 | 192GB | 1TB+ | 100Gbps |
| L40 | 8 | 28 | 128GB | 512GB+ | 50Gbps |
| RTX 4090 | 4 | 16 | 64GB | 512GB+ | 10Gbps |
| RTX A6000 | 8 | 14 | 46GB | 512GB+ | 10Gbps |
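The proportional scaling described above can be sketched in a few lines. This is an illustrative helper, not part of any provider API: the `GPU_LIMITS` dictionary simply mirrors the table, and the "4TB+" / "1TB+" storage figures are treated as minimums in whole gigabytes (an assumption about units).

```python
# Per-GPU limits copied from the table above. Per-GPU values are fixed;
# instance totals scale linearly with the GPU count.
GPU_LIMITS = {
    # model: (max_gpus, vcpus_per_gpu, ram_gb_per_gpu, min_storage_gb_per_gpu)
    "H100 80GB SXM5":   (8, 28, 192, 4000),
    "H100 80GB NVLINK": (8, 28, 192, 4000),
    "H100 80GB PCIE":   (8, 26, 200, 1000),
    "A100 80GB NVLINK": (8, 28, 192, 4000),
    "A100 80GB PCIE":   (8, 24, 192, 1000),
    "L40":              (8, 28, 128, 512),
    "RTX 4090":         (4, 16, 64,  512),
    "RTX A6000":        (8, 14, 46,  512),
}

def instance_totals(model: str, gpu_count: int) -> dict:
    """Return total vCPUs, RAM, and minimum storage for an instance."""
    max_gpus, vcpus, ram_gb, storage_gb = GPU_LIMITS[model]
    if not 1 <= gpu_count <= max_gpus:
        raise ValueError(f"{model} supports 1-{max_gpus} GPUs, got {gpu_count}")
    return {
        "vcpus": vcpus * gpu_count,
        "ram_gb": ram_gb * gpu_count,
        "min_storage_gb": storage_gb * gpu_count,
    }
```

For example, a full 8-GPU H100 SXM5 instance works out to 224 vCPUs, 1,536 GB of RAM, and at least 32 TB of storage.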