PRODUCT INFO ID: 5819
PNY NVIDIA A30 Module 24GB HBM2 ECC 3072-bit, PCI-E 4.0 x16
NVIDIA A30
Versatile Compute Acceleration for Mainstream Enterprise Servers
Bring accelerated performance to every enterprise workload with NVIDIA A30 Tensor Core GPUs. Built on NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), the A30 delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimized for mainstream servers, the A30 enables an elastic data center and delivers maximum value for enterprises.
The NVIDIA A30 Tensor Core GPU provides a versatile platform for mainstream enterprise workloads such as AI inference, AI training, and HPC. With TF32 and FP64 Tensor Core support, plus an end-to-end hardware and software stack, the A30 lets mainstream AI training and HPC applications be addressed rapidly. MIG ensures quality of service (QoS) by giving diverse users secure, hardware-partitioned, right-sized GPU slices across all of these workloads, making optimal use of the GPU's compute resources.
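To illustrate the TF32 path mentioned above, the following minimal sketch shows how a framework-level switch can route ordinary FP32 matrix math through the Tensor Cores of an Ampere-class GPU such as the A30. It assumes PyTorch with CUDA support is installed; the flags shown are standard PyTorch settings, not anything specific to this product.

```python
import torch

# Allow FP32 matrix multiplications and cuDNN convolutions to run in TF32
# on Ampere-class Tensor Cores (a sketch; assumes PyTorch >= 1.7 with CUDA).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = torch.device("cuda")  # assumes an NVIDIA GPU is visible to the runtime

# A plain FP32 matmul; with the flags above, Ampere hardware may execute it in TF32.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
torch.cuda.synchronize()
print(c.shape)
```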
| Specification | Value |
| --- | --- |
| Architecture | NVIDIA Ampere |
| CUDA Cores | 3584 |
| Tensor Cores | 224 |
| FP32 Performance | 10.3 TFLOPS |
| GPU Memory | 24 GB |
| GPU Memory Type | HBM2 with ECC |
| Memory Bandwidth | 933 GB/s |
| Memory Interface | 3072-bit |
| System Interface | PCIe 4.0 x16 |
| Max Power Consumption | 165 W |
| Form Factor | Full height, dual slot |
| Thermal Management | Passive |
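As a rough check of these specifications on an installed board, the sketch below reads a few of them back through NVML. It assumes the NVIDIA driver and the `nvidia-ml-py` (pynvml) bindings are installed and that the A30 is GPU index 0; exact reported values depend on driver and vBIOS.

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the A30 is GPU index 0

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):          # older pynvml releases return bytes
    name = name.decode()

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)                   # total/used/free, in bytes
power_limit = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)   # milliwatts
major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)

print(f"GPU:                {name}")
print(f"Memory total:       {mem.total / 1024**3:.1f} GiB")    # expect roughly 24 GiB
print(f"Power limit:        {power_limit / 1000:.0f} W")       # expect ~165 W at defaults
print(f"Compute capability: {major}.{minor}")                  # 8.0 for the GA100-based A30

pynvml.nvmlShutdown()
```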