AMD Radeon Instinct MI100 to feature 120 Compute Units, expected in December

Published: Jul 30th 2020, 07:18 GMT

Please note that this post is tagged as a rumor.

Alleged specifications of the upcoming Radeon Instinct accelerator have been posted by AdoredTV.

AMD Radeon Instinct MI100 to feature 120 CUs?

AMD’s upcoming compute accelerator based on ‘new’ CDNA architecture is now expected to launch by the end of this year for EPYC-based systems, AdoredTV claims.

The full specifications of the Radeon Instinct MI100 are still unknown, but the first bits of information leaked back in February, when a BIOS file revealed that the accelerator features 32GB of HBM2 memory. The latest AdoredTV article supports that figure.

Alleged AMD Radeon Instinct MI100 Specifications and Features, Source: AdoredTV

The website further claims that the MI100 will feature 120 Compute Units. Assuming that each CDNA Compute Unit contains 64 processors, the accelerator would have 7680 cores in total. We are intentionally not calling them Stream Processors, because we are unsure whether that name carries over to this architecture.

What does not make sense, however, is the 42 TFLOPS claim on the slide. This would make the MI100 more than twice as fast as NVIDIA's Ampere A100 (19.5 TFLOPS). Reaching 42 TFLOPS would require either 7680 cores running at 2.75 GHz or 15360 cores running at 1.35 GHz. The latter would suggest that each Compute Unit contains 128 cores, not 64.
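The arithmetic above follows from the usual peak-throughput formula: cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch of the back-of-the-envelope math, using the rumored (unconfirmed) core counts and the clocks they would imply:

```python
def peak_fp32_tflops(cores: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS, assuming one FMA (2 FLOPs) per core per cycle."""
    return cores * 2 * clock_ghz / 1000

# 120 CUs x 64 cores would need an implausibly high ~2.75 GHz:
print(round(peak_fp32_tflops(7680, 2.75), 1))   # ~42.2

# 120 CUs x 128 cores at a plausible ~1.35 GHz:
print(round(peak_fp32_tflops(15360, 1.35), 1))  # ~41.5

# NVIDIA A100 for comparison: 6912 CUDA cores at ~1.41 GHz boost:
print(round(peak_fp32_tflops(6912, 1.41), 1))   # ~19.5
```

Either combination lands near the claimed 42 TFLOPS; only the 128-cores-per-CU reading yields a realistic clock speed for a 7nm compute die.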

The following slide does mention that MI100 will indeed offer 2.4x the performance of the A100 in FP32 calculations:

AMD Radeon Instinct MI100 – offering 2.4x the FP32 performance of the NVIDIA A100, Source: AdoredTV

The GPU die size has not been confirmed, but it is certainly much bigger than before. The compute card based on the CDNA architecture is rumored to feature the Arcturus processor. Although we refer to Arcturus as a GPU, the processor will not have graphics pipelines. This puts the MI100 even closer to NVIDIA's A100 accelerator, based on the 'Ampere' architecture, in terms of capabilities. Both are expected to compete in the AI, ML, and HPC markets.

Two configurations expected, first in December

Today AdoredTV posted another set of slides showing the MI100 in two server configurations: a 3U system with eight GPUs and two EPYC CPUs, and a 1U system with four GPUs and two CPUs. The 1U system is also expected to launch with Intel Xeon CPUs; the EPYC servers are now expected in December this year (1U) and in March 2021 (3U).

AMD is allegedly targeting the CDNA-based MI100 at government labs, oil & gas, machine learning training, and academia.

AMD Radeon Instinct MI100 – 3U System Details, Source: AdoredTV

AMD Radeon Instinct MI100 – 1U System Details, Source: AdoredTV

AMD Radeon Instinct Series (RUMORED Specifications)

              MI100              MI60           MI50
GPU           7nm Arcturus XL    7nm Vega 20    7nm Vega 20
Cores         7680?              4096           3840
Memory        32GB HBM2          32GB HBM2      16GB HBM2
Memory Bus    4096-bit           4096-bit       4096-bit
TBP           ~300W              300W           300W

Source: AdoredTV (Slides), AdoredTV (Specs)



