NVIDIA A100 80GB

  • NVIDIA Ampere GPU architecture
  • GPU accelerator optimized for AI, HPC and data centres
  • 6912 NVIDIA CUDA cores for parallel computing
  • 432 Tensor Cores to accelerate AI model training and inference
  • 80 GB of HBM2e memory with ECC to support large models and datasets
  • Memory bandwidth of up to 1,935 GB/s
  • PCIe 4.0 x16 interface for high communication bandwidth
  • Passive cooling suited to servers and computing systems
  • Maximum power consumption: 300 W

Product intended for professional use only

Description

NVIDIA A100 80GB GPU accelerator for AI and data centres

The NVIDIA A100 Tensor Core GPU delivers exceptional computing performance for the most demanding data centre environments. Built on the NVIDIA Ampere architecture, the A100 is the foundation of modern computing platforms used in artificial intelligence, data analytics and high-performance computing (HPC).

The NVIDIA A100 80GB model offers a very large GPU memory capacity and one of the highest memory bandwidths available in server accelerators. This lets it handle the most demanding AI models, massive datasets and complex computational simulations. The GPU delivers up to 20 times the performance of the previous generation of data centre accelerators.

NVIDIA A100 80GB is one of the most widely used GPU accelerators in AI systems, HPC clusters and computing infrastructure for next-generation AI models.

NVIDIA Ampere Architecture and Tensor Cores

The A100 GPU features 6912 CUDA cores and 432 Tensor Cores that accelerate the matrix operations used in machine learning and deep learning. This enables the accelerator to excel both at training AI models and at running inference in production environments.
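
For illustration, here is a minimal Python sketch of how software typically engages these Tensor Cores (assuming PyTorch with CUDA support, which is not part of this product; on Ampere GPUs, TF32 and BF16 matrix multiplies are routed to Tensor Cores automatically):

    import torch

    device = torch.device("cuda")  # assumes an A100 (or another CUDA GPU) is visible

    # Opt in to TF32 for float32 matmuls; on Ampere these run on Tensor Cores.
    torch.backends.cuda.matmul.allow_tf32 = True

    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    c = a @ b  # float32 inputs, executed in TF32 on Tensor Cores

    # BF16 autocast roughly doubles Tensor Core throughput versus TF32.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        d = a @ b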

Additionally, 80 GB of HBM2e memory with ECC and a bandwidth in excess of 1.9 TB/s enable the processing of massive datasets and very large artificial intelligence models.
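
As a short illustration, the advertised capacity can be verified from software; a minimal sketch assuming PyTorch (the values in the comments are indicative):

    import torch

    props = torch.cuda.get_device_properties(0)
    print(props.name)                                 # e.g. "NVIDIA A100 80GB PCIe"
    print(f"{props.total_memory / 1024**3:.0f} GiB")  # roughly 80 GiB of HBM2e
    print(props.multi_processor_count)                # 108 streaming multiprocessors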


MIG - multiple GPU instances in a single accelerator

One of the key NVIDIA A100 technologies is Multi-Instance GPU (MIG), which allows a single accelerator to be split into up to seven independent GPU instances. Each instance can handle separate computing workloads, ensuring resource isolation and maximum hardware utilisation.

With MIG technology, GPUs can be used efficiently in cloud environments, AI platforms and data centres supporting multiple concurrent workloads.
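
As a sketch of how MIG can be inspected programmatically, assuming the nvidia-ml-py (pynvml) bindings are installed (an illustration, not part of the product):

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Current and pending MIG mode (0 = disabled, 1 = enabled)
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print("MIG enabled:", bool(current))

    # Enumerate the MIG instances exposed by this physical GPU
    if current:
        for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):  # up to 7 on A100
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
            except pynvml.NVMLError_NotFound:
                continue  # slot not populated
            print(pynvml.nvmlDeviceGetUUID(mig))

    pynvml.nvmlShutdown()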


Accelerator for Modern AI Systems

The NVIDIA A100 is widely used in artificial intelligence infrastructures where massive computing power and high memory bandwidth are required. This accelerator is used for, among other things:

  • training large language models (LLMs)
  • multimodal vision-language models (VLMs)
  • Retrieval-Augmented Generation (RAG) platforms
  • analysis of large datasets
  • AI model inference in production environments
  • recommendation and analytics systems

With high-performance Tensor Cores and massive memory bandwidth, the NVIDIA A100 significantly reduces AI model training times and accelerates inference in production applications.
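
As a rough illustration of the mixed-precision training that exploits these Tensor Cores, a minimal sketch assuming PyTorch (the model and data below are hypothetical placeholders):

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()  # loss scaling keeps FP16 gradients stable

    inputs = torch.randn(256, 1024, device="cuda")   # placeholder batch
    targets = torch.randint(0, 10, (256,), device="cuda")

    for step in range(100):
        optimizer.zero_grad(set_to_none=True)
        # Forward pass runs in FP16 on Tensor Cores where safe
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = nn.functional.cross_entropy(model(inputs), targets)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()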


Data centre and HPC applications

The NVIDIA A100 accelerator is designed for high-density server systems. Passive cooling and a PCIe 4.0 x16 interface enable integration into server platforms and GPU clusters used in:

  • high-performance computing (HPC)
  • scientific simulations
  • data analytics in data centres
  • artificial intelligence systems
  • enterprise AI platforms
  • cloud GPU infrastructure

Technical Specification

                                A100 80GB PCIe                A100 80GB SXM
FP64                            9.7 TFLOPS                    9.7 TFLOPS
FP64 Tensor Core                19.5 TFLOPS                   19.5 TFLOPS
FP32                            19.5 TFLOPS                   19.5 TFLOPS
Tensor Float 32 (TF32)          156 TFLOPS | 312 TFLOPS*      156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core            312 TFLOPS | 624 TFLOPS*      312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core                312 TFLOPS | 624 TFLOPS*      312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core                624 TOPS | 1248 TOPS*         624 TOPS | 1248 TOPS*
GPU Memory                      80 GB HBM2e                   80 GB HBM2e
GPU Memory Bandwidth            1,935 GB/s                    2,039 GB/s
Max Thermal Design Power (TDP)  300 W                         400 W***
Multi-Instance GPU (MIG)        Up to 7 MIGs @ 10 GB          Up to 7 MIGs @ 10 GB
Form Factor                     Dual-slot air-cooled or       SXM
                                single-slot liquid-cooled
Interconnect                    NVIDIA® NVLink® Bridge for    NVLink: 600 GB/s
                                2 GPUs: 600 GB/s**            PCIe Gen4: 64 GB/s
                                PCIe Gen4: 64 GB/s
Server Options                  Partner and NVIDIA-Certified  NVIDIA HGX™ A100: Partner and
                                Systems™ with 1–8 GPUs        NVIDIA-Certified Systems with
                                                              4, 8, or 16 GPUs;
                                                              NVIDIA DGX™ A100 with 8 GPUs

*    With sparsity
**   SXM4 GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to two GPUs
***  400 W TDP for standard configuration. The HGX A100-80GB custom thermal solution (CTS) SKU can support TDPs up to 500 W.

Contact an Elmark specialist

Have questions? Need advice? Call or write to us!