'AI Ambition is Pushing Copper To Its Breaking Point'
Friday, November 29, 2024, 12:30, by Slashdot
According to researchers at Fujitsu, the number of parameters in AI systems is growing 32-fold approximately every three years. To support these models, chip designers like Nvidia use extremely high-speed interconnects -- on the order of 1.8 terabytes a second -- to make eight or more GPUs look and behave like a single device. The problem, though, is that the faster you shuffle data across a wire, the shorter the distance over which the signal can be maintained. At those speeds, you're limited to about a meter or two over copper cables. The alternative is to use optics, which can maintain a signal over a much longer distance. In fact, optics are already employed in many rack-to-rack scale-out fabrics, like those used in AI model training. Unfortunately, in their current form, pluggable optics aren't particularly efficient or particularly fast. Earlier in 2024 at GTC, Nvidia CEO Jensen Huang said that if the company had used optics instead of copper to stitch together the 72 GPUs that make up its NVL72 rack systems, it would have required an additional 20 kilowatts of power. Read more of this story at Slashdot.
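To put the Fujitsu figure in perspective, here's a quick back-of-the-envelope calculation (assuming steady exponential growth, which is an idealization) of what "32-fold every three years" implies per year and per doubling:

```python
import math

# Fujitsu's figure: AI parameter counts grow 32-fold roughly every three years.
GROWTH_FACTOR = 32
PERIOD_YEARS = 3

# Equivalent annual growth factor: 32^(1/3), roughly 3.17x per year.
annual_factor = GROWTH_FACTOR ** (1 / PERIOD_YEARS)

# Doubling time: since 32 = 2^5, that's five doublings in three years,
# i.e. one doubling about every 7.2 months.
doubling_years = PERIOD_YEARS * math.log(2) / math.log(GROWTH_FACTOR)

print(f"Annual growth factor: {annual_factor:.2f}x")
print(f"Doubling time: {doubling_years * 12:.1f} months")
```

At that pace, interconnect bandwidth would need to double well under once a year just to keep per-parameter data movement constant, which is why the copper reach limit described above bites so quickly.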
https://tech.slashdot.org/story/24/11/29/1128242/ai-ambition-is-pushing-copper-to-its-breaking-point...