NVIDIA is preparing the H800, an export-compliant adaptation of the H100 GPU, for the Chinese market.

NVIDIA's H100 accelerator is one of the most capable options for AI workloads, and organizations and institutions everywhere want it to power their AI projects. However, US export restrictions make it difficult to ship such products to destinations like China, so NVIDIA had to get creative and design a special variant of the H100, called the H800, for the Chinese market. Late last year, NVIDIA did the same with the A100, producing a China-specific A800 whose main difference is a reduction in chip-to-chip interconnect bandwidth from 600 GB/s to 400 GB/s.

Similar limits apply to this year's H800 SKU, and it appears the manufacturer made comparable compromises to ship its chips to China. The H800's bi-directional chip-to-chip bandwidth is cut from the 600 GB/s of the standard H100 PCIe variant to just 300 GB/s. We lack information on whether the number of CUDA or Tensor cores has been changed, but the loss of bandwidth required to comply with export rules has consequences on its own: training large models involves transferring enormous amounts of data between chips, so the reduced link speed raises communication latency and slows the workload relative to the standard H100. According to Reuters, an NVIDIA representative declined to discuss any further differences, saying only that "our 800 series products are entirely compliant with export control rules."
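To make the impact concrete, here is a minimal back-of-envelope sketch in Python. It uses a standard ring all-reduce cost model to show how halving the per-link bandwidth roughly doubles the time spent synchronizing gradients. The model size, GPU count, and the cost formula itself are illustrative assumptions, not measurements of either chip.

```python
# Rough, illustrative estimate of how chip-to-chip bandwidth affects
# multi-GPU training. All figures below are hypothetical assumptions,
# apart from the 600 GB/s (H100 PCIe) and 300 GB/s (H800) link numbers.

def allreduce_time_seconds(param_count: int, bytes_per_param: int,
                           num_gpus: int, link_bandwidth_gbps: float) -> float:
    """Estimate the time for one ring all-reduce of the gradients.

    A ring all-reduce moves roughly 2 * (N - 1) / N times the gradient
    size per GPU, which we divide by the per-link bandwidth (GB/s).
    """
    gradient_bytes = param_count * bytes_per_param
    traffic_per_gpu = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    return traffic_per_gpu / (link_bandwidth_gbps * 1e9)

# Hypothetical example: a 20-billion-parameter model, fp16 gradients, 8 GPUs.
params = 20_000_000_000
for name, bw in [("H100 PCIe (600 GB/s)", 600), ("H800 (300 GB/s)", 300)]:
    t = allreduce_time_seconds(params, 2, 8, bw)
    print(f"{name}: ~{t:.2f} s of communication per gradient sync")
```

In practice, frameworks overlap much of this communication with computation, so the real-world slowdown depends on the model and the parallelism strategy; the sketch only illustrates why halved interconnect bandwidth matters most for large, multi-GPU training runs.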

About Mohammed Abdulrauf

I have interest and experience in several fields, most notably video editing, review writing, photography, gaming, and sports.
I love technology and computers, building and upgrading them, and I try to improve myself in these areas.
