NVIDIA teases Hopper architecture announcement at GTC 2022
Jensen Huang, the CEO of NVIDIA, will host a keynote on March 22nd.
GTC 2022 teaser, Source: NVIDIA
The Hopper graphics architecture is more than likely to be announced next week during the NVIDIA CEO's keynote. This long-awaited and much-rumored product series is dedicated to data centers and high-performance computing.
The NVIDIA Hopper architecture is believed to be the company’s first Multi-Chip-Module design, featuring as many as two chiplets. Furthermore, it is rumored to be one of the largest graphics processors ever made by NVIDIA, with a single die size approaching the reticle limit of 853 mm². Should other rumors turn out to be true, the GH100 GPU could also feature as many as 140 billion transistors, almost 2.6x more than its Ampere predecessor. The MCM design is expected to debut with GH202, while the GH100 GPU might be monolithic.
The full configuration of the GPU chiplet might offer up to 144 Streaming Multiprocessors and 18,432 FP32 (CUDA) cores. This means that a two-chiplet design would feature as many as 288 SMs and 36,864 cores. Naturally, the actual product (such as the H100 Tensor Core) will undoubtedly ship with cut-down specs to meet the required yields and performance. Its predecessor had around 16% of the chip disabled and one memory module missing.
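The rumored figures above imply 128 FP32 cores per SM (an assumption carried over from the leak's math; NVIDIA has not confirmed Hopper's SM layout). A minimal sketch of that arithmetic:

```python
# Assumption: 128 FP32 cores per SM, implied by the rumored
# 144 SM / 18,432 core figures; not confirmed by NVIDIA.
CORES_PER_SM = 128

def fp32_cores(sms: int, cores_per_sm: int = CORES_PER_SM) -> int:
    """Total FP32 (CUDA) cores for a given Streaming Multiprocessor count."""
    return sms * cores_per_sm

single_chiplet = fp32_cores(144)      # full GH100 chiplet: 144 SMs
dual_chiplet = fp32_cores(144 * 2)    # rumored two-chiplet MCM: 288 SMs

print(single_chiplet, dual_chiplet)   # 18432 36864
```

The same per-SM figure reproduces both the single-chiplet and the hypothetical dual-chiplet totals quoted in the rumors.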
As mentioned above, the GH100 is not the only Hopper GPU that NVIDIA has been working on. According to data leaked by a hacking group that managed to access NVIDIA servers, a GH202 GPU is also in development. However, further details on this GPU are not yet available.
The NVIDIA CEO keynote will be livestreamed on March 22 at 8 AM Pacific Time.
NVIDIA Data-Center GPUs

| VideoCardz.com | NVIDIA H100 | NVIDIA A100 | NVIDIA Tesla V100 | NVIDIA Tesla P100 |
|---|---|---|---|---|
| GPU | GH100 | GA100 | GV100 | GP100 |
| Transistors | ~140B | 54.2B | 21.1B | 15.3B |
| Die Size | ~900 mm² | 828 mm² | 815 mm² | 610 mm² |
| Architecture | Hopper | Ampere | Volta | Pascal |
| Fabrication Node | TSMC N5 | TSMC N7 | 12nm FFN | 16nm FinFET+ |
| SMs | ~134 | 108 | 80 | 56 |
| CUDA Cores | ~17,152 | 6,912 | 5,120 | 3,584 |
| Tensor Cores | TBC | 432 | 320 | – |
| Memory Bus | 6144-bit (?) | 5120-bit | 4096-bit | 4096-bit |
| Memory Size | 128GB HBM3 (?) | 40/80GB HBM2e | 16/32GB HBM2 | 16GB HBM2 |
| TDP | 250-500W (?) | 250W/300W/400W | 250W/300W/450W | 250W/300W |
| Interface | SXM4/PCIe (?) | SXM3/PCIe | SXM2/PCIe | SXM/PCIe |
| Launch Year | 2022 | 2020 | 2017 | 2016 |
Source: NVIDIA