
NVIDIA A30
- NVIDIA Ampere GPU architecture
- Compute-optimized GPU
- 3584 NVIDIA CUDA Cores
- 224 NVIDIA Tensor Cores
- 24 GB HBM2 memory with ECC
- Up to 933 GB/s memory bandwidth
- Max. power consumption: 165 W
- Graphics bus: PCI-E 4.0 x16
- Thermal solution: Passive
Product intended for professional use only
Description
Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), the A30 delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimal for mainstream servers, the A30 enables an elastic data center and delivers maximum value for enterprises.
Technical Specification
| Specification | Value |
|---|---|
| FP64 | 5.2 teraFLOPS |
| FP64 Tensor Core | 10.3 teraFLOPS |
| FP32 | 10.3 teraFLOPS |
| TF32 Tensor Core | 82 teraFLOPS (165 teraFLOPS*) |
| BFLOAT16 Tensor Core | 165 teraFLOPS (330 teraFLOPS*) |
| FP16 Tensor Core | 165 teraFLOPS (330 teraFLOPS*) |
| INT8 Tensor Core | 330 TOPS (661 TOPS*) |
| INT4 Tensor Core | 661 TOPS (1,321 TOPS*) |
| Media Engines | 1 Optical Flow Accelerator (OFA); 1 JPEG decoder (NVJPEG); 4 video decoders (NVDEC) |
| GPU Memory | 24 GB HBM2 |
| GPU Memory Bandwidth | 933 GB/s |
| Interconnect | PCIe Gen4: 64 GB/s; third-gen NVLink: 200 GB/s** |
| Form Factor | Dual-slot, full-height, full-length (FHFL) |
| Max Thermal Design Power (TDP) | 165 W |
| Multi-Instance GPU (MIG) | 4 GPU instances @ 6 GB each; 2 GPU instances @ 12 GB each; 1 GPU instance @ 24 GB |
| Virtual GPU (vGPU) Software Support | NVIDIA AI Enterprise, NVIDIA Virtual Compute Server |
* With sparsity
** NVLink Bridge for up to two GPUs
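The MIG configurations listed above are created with NVIDIA's `nvidia-smi` tool. The following is a minimal administrative sketch, not a definitive procedure: the device index `0` is an assumption, the exact profile names can vary by driver version, and enabling MIG mode may require draining workloads and resetting the GPU.

```shell
# Enable MIG mode on GPU 0 (assumes the A30 is device 0)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles the driver offers
# (on A30, these correspond to the 6 GB, 12 GB, and 24 GB splits above)
nvidia-smi mig -lgip

# Example: partition the A30 into two 12 GB GPU instances and
# create a default compute instance on each (-C)
sudo nvidia-smi mig -i 0 -cgi 2g.12gb,2g.12gb -C

# Verify the resulting GPU instances
nvidia-smi mig -lgi
```

Each MIG instance then appears to CUDA applications as an isolated GPU with its own memory and compute slice, which is how a single A30 serves multiple inference workloads securely.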

