NVIDIA announces A100 Tensor Core GPU specifications

Published: 14th May 2020, 11:49 | Comments

NVIDIA Tesla A100 features 6912 CUDA Cores

The card features NVIDIA's 7nm Ampere GA100 GPU with 6912 CUDA cores and 432 Tensor cores. The die measures 826 mm² and packs 54 billion transistors, divided into 108 Streaming Multiprocessors. The full GA100 die carries 128 SMs, so the GPU in the Tesla A100 is clearly not the full chip.
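The core count follows directly from the SM count. A quick sanity check, assuming 64 FP32 CUDA cores per Ampere SM (per NVIDIA's GA100 figures):

```python
# Assumption: 64 FP32 CUDA cores per GA100 SM.
CORES_PER_SM = 64

enabled_sms = 108        # SMs enabled on the Tesla A100 product
full_ga100_sms = 128     # SMs on the full GA100 die

print(enabled_sms * CORES_PER_SM)     # 6912 cores, matching the spec
print(full_ga100_sms * CORES_PER_SM)  # 8192 cores on a full die
```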

The card features third-generation NVLink, offering 600 GB/s of bi-directional GPU-to-GPU bandwidth and up to 4.8 TB/s of bi-directional bandwidth within the server.
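The 600 GB/s figure decomposes into per-link rates. A minimal sketch, assuming third-generation NVLink's 12 links per GPU at 25 GB/s per direction per link (numbers from NVIDIA's A100 material, not stated in the article):

```python
# Assumption: NVLink 3 = 12 links per GPU, 25 GB/s each way per link.
links = 12
gb_s_per_direction = 25

per_link_bidir = 2 * gb_s_per_direction  # 50 GB/s bi-directional per link
total_bidir = links * per_link_bidir     # aggregate GPU-to-GPU bandwidth
print(total_bidir)                       # 600 GB/s
```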

Tesla A100 features 40GB of HBM2e memory on a 5120-bit memory bus.
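Peak memory bandwidth follows from the bus width and the effective memory clock (bus width in bits × data rate ÷ 8 bits per byte):

```python
# Peak DRAM bandwidth = bus width (bits) * effective clock / 8 bits per byte.
bus_bits = 5120
eff_clock_hz = 2430e6  # 2430 MHz effective (data-rate) clock

bandwidth_gb_s = bus_bits * eff_clock_hz / 8 / 1e9
print(round(bandwidth_gb_s))  # ~1555 GB/s
```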

NVIDIA Tesla Series

| VideoCardz.com | A100 | Tesla V100S | Tesla V100 | Tesla P100 |
|---|---|---|---|---|
| GPU | 7nm GA100 | 12nm GV100 | 12nm GV100 | 16nm GP100 |
| Die Size | 826 mm² | 815 mm² | 815 mm² | 610 mm² |
| Transistors | 54 billion | 21.1 billion | 21.1 billion | 15.3 billion |
| SMs | 108 | 80 | 80 | 56 |
| CUDA Cores | 6912 | 5120 | 5120 | 3840 |
| Tensor Cores | 432 | 640 | 640 | N/A |
| FP16 Compute | 78 TFLOPS | 32.8 TFLOPS | 31.4 TFLOPS | 21.2 TFLOPS |
| FP32 Compute | 19.5 TFLOPS | 16.4 TFLOPS | 15.7 TFLOPS | 10.6 TFLOPS |
| FP64 Compute | 9.7 TFLOPS | 8.2 TFLOPS | 7.8 TFLOPS | 5.3 TFLOPS |
| Boost Clock | ~1410 MHz | ~1601 MHz | ~1533 MHz | ~1480 MHz |
| Max. Memory Bandwidth | 1555 GB/s | 1134 GB/s | 900 GB/s | 721 GB/s |
| Eff. Memory Clock | 2430 MHz | 2214 MHz | 1760 MHz | 1408 MHz |
| Memory Config. | 40GB HBM2e | 32GB HBM2 | 16GB / 32GB HBM2 | 16GB HBM2 |
| Memory Bus | 5120-bit | 4096-bit | 4096-bit | 4096-bit |
| TDP | 400W | 250W | 300W | 300W |
| Form Factor | SXM4 / PCIe 4.0 | PCIe 3.0 | SXM2 / PCIe 3.0 | SXM |
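The compute rows are consistent with the core counts and boost clock. A back-of-the-envelope check, assuming 2 FLOPs per core per clock (fused multiply-add) and half-rate FP64 units, as in NVIDIA's GA100 figures:

```python
# Assumptions: FMA = 2 FLOPs/core/clock; GA100 FP64 units run at half the FP32 count.
boost_ghz = 1.41
fp32_cores = 6912
fp64_cores = fp32_cores // 2  # 3456 FP64 units

fp32_tflops = fp32_cores * 2 * boost_ghz / 1000
fp64_tflops = fp64_cores * 2 * boost_ghz / 1000
print(round(fp32_tflops, 1), round(fp64_tflops, 1))  # 19.5 9.7
```

Both values match the table's 19.5 and 9.7 TFLOPS entries for the A100.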


by WhyCry
