NVIDIA H100 Specs
Some details on NVIDIA’s next-gen AI accelerator leaked just an hour ahead of the official announcement.
Contrary to rumors that the NVIDIA H100, based on the Hopper architecture, would use TSMC N5, NVIDIA today announced that its latest accelerator uses a custom TSMC N4 process technology. The rumors were also wrong about the transistor count: this monolithic GPU is built with 80 billion transistors. The chip features up to 16896 FP32 CUDA cores in the SXM variant and 14592 cores in the PCIe-based model.
NVIDIA H100 Specifications, Source: NVIDIA
The NVIDIA H100 has HBM3 memory with 3 TB/s of bandwidth, 1.5x more than the A100. This next-gen accelerator features 80GB of High-Bandwidth Memory. The memory technology depends on the variant, though: the SXM model has HBM3 rated at 3 TB/s, whereas the PCIe-based H100 GPU has HBM2e rated at 2 TB/s.
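The 1.5x figure is easy to sanity-check against the A100's published peak of roughly 2 TB/s (HBM2e, 80GB SXM variant). A minimal arithmetic sketch, with the A100 number taken from NVIDIA's public A100 specs rather than this article:

```python
# Sanity check of the claimed H100-vs-A100 memory bandwidth uplift.
# H100 SXM (HBM3) figure is from the article; the A100 figure (~2 TB/s)
# is an assumption based on NVIDIA's published A100 80GB SXM specs.
h100_bw_tbs = 3.0
a100_bw_tbs = 2.0

uplift = h100_bw_tbs / a100_bw_tbs
print(f"H100 vs A100 bandwidth uplift: {uplift:.1f}x")  # 1.5x
```

The same arithmetic shows why the PCIe model, at 2 TB/s of HBM2e, offers no bandwidth uplift over the A100 at all.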
In terms of performance, NVIDIA claims 3x higher compute throughput in FP64, TF32, and FP16, and 6x higher in FP8, compared to the A100.
The accelerator will be available in PCIe Gen5 and SXM form factors. The latter has a TDP of 700W, exactly 300W more than the A100.
NVIDIA Grace SuperChips Specifications, Source: VideoCardz
NVIDIA is also launching two Arm-based Grace CPU platforms today. One combines a Grace CPU with a Hopper GPU and features 600GB of memory, whereas the other pairs two Grace CPUs for a total of 144 cores and is equipped with LPDDR5X memory. Both Grace Superchips will be available in the first half of 2023.
**RUMORED NVIDIA Data-Center GPUs Specifications**

| VideoCardz.com | NVIDIA H100 | NVIDIA A100 | NVIDIA Tesla V100 | NVIDIA Tesla P100 |
|---|---|---|---|---|
| Die Size | 814 mm² | 828 mm² | 815 mm² | 610 mm² |
| Fabrication Node | TSMC N4 | TSMC N7 | 12nm FFN | 16nm FinFET+ |
| Memory Size | 80 GB HBM3/HBM2e* | 40/80GB HBM2e | 16/32GB HBM2 | 16GB HBM2 |
| Interface | SXM5/*PCIe Gen5 | SXM4/PCIe Gen4 | SXM2/PCIe Gen3 | SXM/PCIe Gen3 |