AMD RX 7900 XT now supported by ROCm 5.7
Three weeks ago, AMD announced that it would support its first RDNA3 GPUs for PyTorch through the ROCm platform. Today, a new gaming GPU is being added to the list.
AMD is willing to put money and effort into supporting machine learning development on consumer hardware. AMD's engineers are actively working to make additional RDNA3 GPUs available to accelerate PyTorch, the popular open-source framework used to train neural networks. PyTorch takes advantage of NVIDIA's tensor cores, or, in AMD's case, their AI Accelerators.
AMD is promoting its latest ROCm 5.7 platform, the company's own open-source software stack that enables GPU compute for artificial intelligence workloads on the Ubuntu 22.04.3 operating system. ROCm supports several programming models, such as HIP, OpenMP, and OpenCL, and also integrates machine learning frameworks like PyTorch and TensorFlow.
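For readers wondering whether their PyTorch install actually uses ROCm: ROCm builds of PyTorch reuse the familiar `torch.cuda` API surface, so the usual availability check also reports AMD GPUs, and `torch.version.hip` identifies a HIP/ROCm build. A minimal sketch (the `rocm_status` helper is hypothetical, written for illustration):

```python
# Quick sanity check for a ROCm-enabled PyTorch install.
# ROCm builds of PyTorch expose AMD GPUs through the torch.cuda
# namespace, so torch.cuda.is_available() also covers Radeon cards.

def rocm_status() -> str:
    """Return a short description of the PyTorch GPU backend, if any."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"

    # torch.version.hip is set (non-None) only on ROCm/HIP builds.
    if getattr(torch.version, "hip", None):
        return (f"ROCm/HIP build {torch.version.hip}, "
                f"GPU available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        return "CUDA build, GPU available"
    return "CPU-only build"

print(rocm_status())
```

On a supported card such as the RX 7900 XT with ROCm 5.7 installed, this should report a HIP build with the GPU available.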
The latest news is that hardware support is extending to the Radeon RX 7900 XT, which features 20 GB of memory and 168 AI Accelerators. It is based on the same Navi 31 GPU as the RX 7900 XTX, but with fewer cores enabled. It is still a high-end GPU, but it has been selling for around $749 for quite some time now, down from its original MSRP of $899.
“We are excited about this latest addition to our portfolio. In combination with ROCm, these high-end GPUs make AI more accessible both from a software and hardware perspective, so developers can choose the solution that best fits their needs”,
— Erik Hultgren, Software Product Manager at AMD.
AMD ROCm 5.7 already supports the Radeon RX 7900 XTX and Radeon PRO W7900, equipped with 24GB and 48GB of memory, respectively. Thus far, AMD has made no comment on potential support for other GPUs, such as the Navi 32-based RX 7800 XT 16GB or the Navi 31-based RX 7900 GRE 20GB. One would also like to see AMD add the RX 6800/6900 series, which feature 16GB of memory as well.
While it is great news that AMD is adding more GPUs to ROCm, support still feels underwhelming compared to CUDA on NVIDIA cards, where it can practically be taken for granted across most GPUs, notes Phoronix.