NVIDIA ‘hopped up’ for GTC 2022, the company set to announce Hopper GPU architecture next week

Published: Mar 18th 2022, 19:25 GMT

NVIDIA teases Hopper architecture announcement at GTC 2022

Jensen Huang, the CEO of NVIDIA, will host a keynote on March 22nd. 

GTC 2022 teaser, Source: NVIDIA

The Hopper graphics architecture is more than likely to be announced next week during the NVIDIA CEO's keynote. The long-awaited and heavily rumored product series is dedicated to data centers and high-performance computing.

NVIDIA's Hopper architecture is believed to include the company's first multi-chip-module (MCM) design, featuring as many as two chiplets. Furthermore, it is rumored to be one of the largest graphics processors ever made by NVIDIA, with a single die size exceeding the reticle limit of 853 mm². Should other rumors turn out to be true, the GH100 GPU could also feature as many as 140 billion transistors, almost 2.6x more than its Ampere predecessor. The MCM design is expected to debut with GH202, while the GH100 GPU might be monolithic.
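For reference, a quick back-of-the-envelope check of that generational jump against GA100's published 54.2 billion transistors (a minimal sketch; the 140-billion figure is itself only a rumor):

```python
# Rough check of the rumored transistor-count jump (figures quoted in the article)
gh100_transistors = 140e9   # rumored GH100 count
ga100_transistors = 54.2e9  # published GA100 count

ratio = gh100_transistors / ga100_transistors
print(f"GH100 vs GA100: ~{ratio:.2f}x")  # ~2.58x, i.e. "almost 2.6x"
```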

The full configuration of the GPU chiplet might offer up to 144 Streaming Multiprocessors (SMs) and 18,432 FP32 (CUDA) cores. This means that a two-chiplet design would feature as many as 288 SMs and 36,864 cores. Naturally, the actual product (such as the H100 Tensor Core GPU) will have cut-down specs to meet the required yields and performance targets. Its predecessor had 16% of the chip disabled and one memory module missing.
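Those totals follow directly from the per-SM layout; here is a minimal sketch of the arithmetic using only the figures quoted above (the 128 FP32 cores per SM is derived from them, not independently confirmed):

```python
# Scaling the rumored full GH100 configuration to a hypothetical two-chiplet part
sms_per_chiplet = 144
fp32_cores_per_chiplet = 18_432

cores_per_sm = fp32_cores_per_chiplet // sms_per_chiplet
print(f"FP32 cores per SM: {cores_per_sm}")  # 128

chiplets = 2
print(f"Total SMs:        {sms_per_chiplet * chiplets}")         # 288
print(f"Total FP32 cores: {fp32_cores_per_chiplet * chiplets}")  # 36,864
```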

As mentioned above, the GH100 is not the only Hopper GPU that NVIDIA has been working on. According to data leaked by a hacking group that gained access to NVIDIA's servers, a GH202 GPU is also in development. However, no further details on this GPU are available yet.

The NVIDIA CEO keynote will be livestreamed on March 22 at 8 AM Pacific Time.

NVIDIA Data-Center GPUs

| | NVIDIA H100 | NVIDIA A100 | NVIDIA Tesla V100 | NVIDIA Tesla P100 |
|---|---|---|---|---|
| GPU | GH100 | GA100 | GV100 | GP100 |
| Transistors | ~140B | 54.2B | 21.1B | 15.3B |
| Die Size | ~900 mm² | 828 mm² | 815 mm² | 610 mm² |
| Architecture | Hopper | Ampere | Volta | Pascal |
| Fabrication Node | TSMC N5 | TSMC N7 | 12nm FFN | 16nm FinFET+ |
| GPU Clusters | ~134 | 108 | 80 | 56 |
| CUDA Cores | ~17152 | 6912 | 5120 | 3584 |
| Tensor Cores | TBC | 432 | 320 | — |
| Memory Bus | 6144-bit (?) | 5120-bit | 4096-bit | 4096-bit |
| Memory Size | 128GB HBM3 (?) | 40/80GB HBM2e | 16/32GB HBM2 | 16GB HBM2 |
| TDP | 250-500W (?) | 250W/300W/400W | 250W/300W/450W | 250W/300W |
| Interface | SXM4/PCIe (?) | SXM3/PCIe | SXM2/PCIe | SXM/PCIe |
| Launch Year | 2022 | 2020 | 2017 | 2016 |

Source: NVIDIA



