NVIDIA H100 pictured up close
More than a month after GTC 2022, ServeTheHome has finally been allowed to publish photos of NVIDIA's latest data-center GPU.
NVIDIA H100 is an upcoming accelerator featuring a 4 nm GPU based on the Hopper architecture. Thus far we have only seen renders of this product, but thanks to ServeTheHome we now get to see the new SXM form factor in real photos. They confirm that the device carries the board model “PG520”:
The H100 uses TSMC's CoWoS packaging technology, with the 814 mm² GH100 GPU die surrounded by six memory modules. This variant features 16896 CUDA cores and ships with 80 GB of HBM3 memory. The SXM mezzanine connector layout has changed compared to the A100: instead of two long connectors on each side of the GPU, one is now shorter.
This Hopper solution consumes up to 700W of power, which is 250W to 300W more than previous SXM data-center GPUs based on Ampere and Volta architectures.
The H100 recently appeared for pre-order in Japan at 33,000 USD. Note, however, that the card on sale was the PCIe Gen5 model, whereas the card pictured here is the SXM variant with more CUDA cores, more memory, and a higher power limit.
NVIDIA Data-Center GPU Specifications

| VideoCardz.com | NVIDIA H100 | NVIDIA A100 | NVIDIA Tesla V100 | NVIDIA Tesla P100 |
|---|---|---|---|---|
| Die Size | 814 mm² | 828 mm² | 815 mm² | 610 mm² |
| Fabrication Node | TSMC N4 | TSMC N7 | 12nm FFN | 16nm FinFET+ |
| Memory Size | 80 GB HBM3 / HBM2e* | 40/80 GB HBM2e | 16/32 GB HBM2 | 16 GB HBM2 |
| Interface | SXM5 / *PCIe Gen5 | SXM4 / PCIe Gen4 | SXM2 / PCIe Gen3 | SXM / PCIe Gen3 |