NVIDIA A100 PCIe 80GB
NVIDIA formally introduces its A100 variant with a PCI Express interface and 80GB of memory.
Back in November, NVIDIA introduced an 80GB variant of the A100 accelerator in the SXM4 form factor. That variant offered twice the memory capacity of the original Ampere GA100-based model, as well as higher bandwidth. Seven months later, NVIDIA is introducing a PCIe-based model with the same specifications, only on a standard interface and with a lower TDP.
The NVIDIA A100 PCIe 80GB is based on the 7nm Ampere GA100 GPU featuring 6912 CUDA cores. The bandwidth on this variant increases to 2039 GB/s (484 GB/s more than the A100 40GB's 1555 GB/s). This is achieved using faster memory with an effective speed of 3186 Mbps per pin.
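The quoted bandwidth follows directly from the memory speed and bus width. A quick sanity check, assuming the GA100's 5120-bit HBM2e memory bus (a known spec, though not stated in the article):

```python
# Rough check of the quoted bandwidth figure for the A100 80GB.
effective_clock_gbps = 3.186   # effective per-pin speed (3186 Mbps)
bus_width_bits = 5120          # assumed: GA100 with all HBM2e stacks enabled

# Bandwidth = per-pin speed x bus width, converted from bits to bytes
bandwidth_gbs = effective_clock_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.0f} GB/s")            # ~2039 GB/s, matching the article

# Uplift over the 40GB model's 1555 GB/s
print(f"+{bandwidth_gbs - 1555:.0f} GB/s")    # ~484 GB/s
```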
This GPU is compute-oriented, which means it has no gaming purpose, at least not in this form. The product is strictly for high-performance computing, accelerating training with deep-learning algorithms.
Furthermore, NVIDIA announced its GPUDirect Storage feature, which is conceptually similar to Microsoft's consumer DirectStorage technology. In the consumer space, DirectStorage gives direct access to fast NVMe storage, which can boost loading times in certain workloads. NVIDIA's technology appears to offer a similar type of direct access, except into the GPU's large memory pool, in this case 80GB of faster HBM2e memory.
NVIDIA Compute Accelerator Series (Formerly Tesla)

| VideoCardz.com | A100 PCIe | A100 SXM | Tesla V100S | Tesla V100 | Tesla P100 |
|---|---|---|---|---|---|
| GPU | 7nm GA100 | 7nm GA100 | 12nm GV100 | 12nm GV100 | 16nm GP100 |
| Die Size | | | | | |
| Transistors | | | | | |
| SMs | | | | | |
| CUDA Cores | | | | | |
| Tensor Cores | | | | | NA |
| FP16 Compute | | | | | |
| FP32 Compute | | | | | |
| FP64 Compute | | | | | |
| Boost Clock | | | | | |
| Bandwidth | | | | | |
| Eff. Memory Clock | | | | | |
| Memory Config. | | | | | |
| Memory Bus | | | | | |
| TDP | | | | | |
| Form Factor | PCIe 4.0 | SXM4 | PCIe 3.0 | SXM2 / PCIe 3.0 | SXM |
Source: HardwareLuxx