Meta's Next Llama AI Models Are Training on a GPU Cluster 'Bigger Than Anything' Else
Thursday, October 31, 2024, 18:20, by Slashdot
Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to have the lead for now, most of the big players in the field are likely working toward compute clusters with more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of about 25,000 H100s that were used to develop Llama 3. Read more of this story at Slashdot.
https://tech.slashdot.org/story/24/10/31/1319259/metas-next-llama-ai-models-are-training-on-a-gpu-cl...