Please note that this post is tagged as a rumor.
NVIDIA Hopper with 2.6x the transistors of GA100?
According to a rumor from the Chiphell forums, NVIDIA's next-gen data center GPU based on the Hopper architecture might feature as many as 140 billion transistors.
Chiphell user Zhangzhonghao did not actually mention GH100 or any other GPU by name; however, the context clearly refers to NVIDIA's upcoming 5nm GPU codenamed Hopper. This AI processor is said to use more than 140 billion transistors.
The most recent rumors put Hopper GPUs among the largest processors ever made. It was said that it might even feature a 900 mm² die, but many enthusiasts are rightly skeptical of this claim because the current EUV photomask (reticle) limit is 858 mm².
Furthermore, it is said that the Hopper architecture might actually span multiple designs, one of them (GH100) being a monolithic chip, while the rumored GH102 might be a multi-chip-module design. It is worth mentioning that there were also rumors about another Ampere data-center chip called GA101 (supposedly half the size of GA100), but that GPU was never released.
The Hopper architecture is set to compete against the already-announced AMD Aldebaran GPU and Intel's upcoming Ponte Vecchio. This might be the first time all three companies simultaneously offer powerful accelerators with HBM2e memory and a multi-chip-module design.
NVIDIA has never acknowledged its Hopper architecture release plans, but the word on the street is that they might announce this architecture at GTC 2022 in March.
**High Performance Computing GPUs**

| VideoCardz.com | NVIDIA GH100 | NVIDIA GA100 | AMD Aldebaran | Intel Ponte Vecchio |
|---|---|---|---|---|
| Architecture | NVIDIA Hopper | NVIDIA Ampere | AMD CDNA2 | Intel Xe-HPC |
| Fabrication Node | TSMC N5 | TSMC N7 | TSMC N6 | Intel 7 (base), TSMC N7 (Xe-Link), TSMC N5 (compute tile) |
| Full Transistor Count | >140B | 54.2B | 58.2B | over 100B |
| Base Tile | (2x?) ~900 mm² | ~826 mm² | 2x ~790 mm² | 2x 640 mm² |
| Memory | TBC | up to 80GB, 5x HBM2e | up to 128GB, 8x HBM2e | up to 128GB, 8x HBM2e |
| GPU Cores | (2x?) ~134 SMs | 108 SMs | 2x 110 Compute Units | 128 Xe Cores |
| L2/L3 Cache | TBC | 40MB | 2x 8MB | 2x 204 MB |
| Form Factor | SXM? (~1000W) | SXM4 (400W), PCIe Gen4 (300W) | TBC | TBC |
| Announcement Date | Q2 2022? | Q4 2020 | Q4 2021 | 2022 |