Mohammed Abdulrauf
I have interest and experience in several fields, most notably video editing, review writing, photography, gaming, and sports. I love technology and computers, building and upgrading them, and I try to develop myself in these areas.
At its Data Center and AI Technology Premiere event on Tuesday, June 15, AMD unveiled the Instinct MI300X GPU. Although the new accelerator's power usage was not included in the keynote presentation, tipster Hoang Anh Phu was nevertheless able to find it in Team Red's post-event footnotes. A comparison was made: "MI300X (192 GB HBM3, OAM Module) TBP is 750 W, compared to last gen, MI250X TBP is only 500-560 W." Last month's leaked Giga Computing roadmap predicted that server-grade GPUs would reach the 700 W threshold.
NVIDIA's Hopper H100 is currently the most power-hungry data center enterprise GPU, requiring up to 700 W. With its marginally higher rating, the OCP Accelerator Module-based MI300X design now surpasses Team Green's flagship. With 304 CDNA 3 compute units, AMD's new "leadership generative AI accelerator" significantly outpaces the MI250X's 220 CDNA 2 CUs. Memory capacity also jumps: thanks to newly developed 24 GB HBM3 stacks, the MI300X can be specified with up to 192 GB, whereas the MI250X tops out at 128 GB with its slower HBM2E stacks. With the MI300X positioned against the H100, we hope to see sample units producing benchmark results very soon.
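As a quick sanity check on the figures above, the 192 GB total is consistent with the new 24 GB HBM3 stacks, and the TBP quote implies a sizeable generational jump. A minimal sketch (the eight-stack count is an assumption for illustration; only the capacities and wattages come from the article):

```python
# Back-of-the-envelope check on the memory and power figures quoted above.
HBM3_STACK_GB = 24      # new HBM3 stack size from the article
ASSUMED_STACKS = 8      # assumed stack count that would yield the quoted total

mi300x_memory_gb = HBM3_STACK_GB * ASSUMED_STACKS
print(f"MI300X memory: {mi300x_memory_gb} GB")  # 192 GB, matching the article

# Generational TBP increase: MI250X upper bound (560 W) -> MI300X (750 W)
tbp_increase_pct = (750 - 560) / 560 * 100
print(f"TBP increase vs. MI250X upper bound: {tbp_increase_pct:.0f}%")  # ~34%
```

In other words, even against the MI250X's highest 560 W configuration, the MI300X's 750 W TBP represents roughly a one-third increase in board power.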