NVIDIA working on H100 with 120GB memory
A mysterious PCIe-based graphics card called H100 120GB has been spotted running alongside an RTX 3090 Ti and an RTX 4090 ES.
Powered by the GH100 processor, the 120GB variant is just as fast as the SXM variant, the report from s-ss claims. A screenshot published by the website shows a supposed H100 processor with 120GB of HBM2e memory; unlike the existing 80GB PCIe model, however, this variant offers increased bandwidth of 3 TB/s.
Such bandwidth has so far only been available with the SXM variant (which uses NVIDIA's proprietary mezzanine connector), a model that is unlocked for higher power but also relies on HBM3 memory. Worth noting: to reach 120GB of capacity, each of the five active stacks would have to be 24GB.
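As a rough sanity check on those numbers, here is a minimal Python sketch of the arithmetic. The six-stacks-on-package, five-enabled layout and the 1024-bit interface per HBM stack are standard for GH100 boards; the ~4.8 Gbps per-pin data rate is purely an assumption chosen to land on the reported 3 TB/s figure, not a confirmed specification.

```python
# Back-of-the-envelope check of the reported H100 120GB memory figures.
# GH100 boards carry six HBM stacks with one disabled, each stack on a
# 1024-bit interface. The 4.8 Gbps per-pin rate is an assumption picked
# to match the claimed 3 TB/s, not a confirmed specification.

ACTIVE_STACKS = 5
TOTAL_CAPACITY_GB = 120
STACK_BUS_WIDTH_BITS = 1024
ASSUMED_PIN_RATE_GBPS = 4.8  # assumption

capacity_per_stack = TOTAL_CAPACITY_GB / ACTIVE_STACKS       # 24 GB per stack
total_bus_width = ACTIVE_STACKS * STACK_BUS_WIDTH_BITS       # 5120-bit bus
bandwidth_gbs = total_bus_width * ASSUMED_PIN_RATE_GBPS / 8  # ~3072 GB/s

print(f"{capacity_per_stack:.0f} GB per stack, {total_bus_width}-bit bus, "
      f"~{bandwidth_gbs / 1000:.1f} TB/s")
```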
Furthermore, the H100 120GB PCIe reportedly has the same GPU specifications as the SXM variant, which in this case means 16896 CUDA cores and 528 Tensor Cores.
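Those counts line up with the SM configuration listed in the table below: a Hopper SM carries 128 FP32 CUDA cores and 4 fourth-generation Tensor Cores, so 132 enabled SMs give exactly the quoted figures. A quick check:

```python
# Derive the quoted core counts from the SM configuration.
# A Hopper (GH100) SM contains 128 FP32 CUDA cores and 4 Tensor Cores;
# the SXM variant (and, per the leak, this 120GB PCIe card) enables 132 SMs.

SMS_ENABLED = 132
CUDA_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

print(SMS_ENABLED * CUDA_CORES_PER_SM)    # 16896 CUDA cores
print(SMS_ENABLED * TENSOR_CORES_PER_SM)  # 528 Tensor Cores
```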
Alleged NVIDIA H100 120GB PCIe, Source: S-SS
What is interesting about this leak is that the sample of the H100 120GB PCIe was listed in Windows Device Manager alongside an ‘RTX ADLCE Engineering Sample’. ADLCE obviously stands for Ada Lovelace, and this is a preproduction unit with its TDP limited to 350W (the final spec is 450W). As a result, its single-precision compute performance is said to be limited to 60 TFLOPS (the retail unit reaches 82 TFLOPS).
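The FP32 figures follow from the usual two FLOPs per CUDA core per clock. Assuming the engineering sample uses the full retail RTX 4090 shader count of 16384 CUDA cores (the leak does not confirm this), the quoted numbers imply roughly the clocks sketched below; the 1.83 GHz value is simply the clock that would produce 60 TFLOPS, not a reported spec.

```python
# FP32 throughput = 2 FLOPs per CUDA core per clock cycle.
# 16384 CUDA cores matches the retail RTX 4090; whether the ADLCE
# engineering sample is fully enabled is an assumption.

CUDA_CORES = 16384

def fp32_tflops(clock_ghz: float) -> float:
    return 2 * CUDA_CORES * clock_ghz / 1000

print(round(fp32_tflops(2.52), 1))  # ~82.6 TFLOPS at the retail 2.52 GHz boost
print(round(fp32_tflops(1.83), 1))  # ~60.0 TFLOPS at ~1.83 GHz (power-limited ES)
```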
NVIDIA Data-Center GPUs Specifications

VideoCardz.com | NVIDIA H100 SXM | NVIDIA H100 120GB PCIe | NVIDIA H100 80GB PCIe |
---|---|---|---|
GPU | GH100 | GH100 | GH100 |
Transistors | 80B | 80B | 80B |
Die Size | 814 mm² | 814 mm² | 814 mm² |
Architecture | Hopper | Hopper | Hopper |
Fabrication Node | TSMC 4N | TSMC 4N | TSMC 4N |
GPU Clusters (SMs) | 132 | 132 | 114 |
CUDA Cores | 16896 | 16896 | 14592 |
L2 Cache | 50MB | 50MB | 50MB |
Tensor Cores | 528 | 528 | 456 |
Memory Bus | 5120-bit | 5120-bit | 5120-bit |
Memory | 80GB HBM3 | 120GB HBM2e | 80GB HBM2e |
TDP | 700W | TBC | 350W |
Interface | SXM5 | PCIe Gen5 | PCIe Gen5 |
Launch Year | 2022 | 2022 | 2022 |
Source: s-ss via MegaSizeGPU