Microsoft CTO Kevin Scott Thinks LLM 'Scaling Laws' Will Hold Despite Criticism

Tuesday, July 16, 2024, 01:30, by Slashdot
An anonymous reader quotes a report from Ars Technica: During an interview with Sequoia Capital's Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) 'scaling laws' will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI. 'Despite what other people think, we're not at diminishing marginal returns on scale-up,' Scott said. 'And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.'

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and are given more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs. Since then, other researchers have questioned whether these scaling laws will keep holding over time, but the concept remains a cornerstone of OpenAI's AI development philosophy. Scott's comments can be found around the 46-minute mark of the podcast.
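For a sense of the power-law form behind that 2020 work (Kaplan et al., "Scaling Laws for Neural Language Models"): the paper fit test loss to model size as L(N) = (N_c / N)^alpha_N, with analogous fits for dataset size and compute. Below is a minimal sketch of that parameter-count law, assuming the paper's approximate fitted constants (alpha_N ~ 0.076, N_c ~ 8.8e13); those numbers come from the paper, not from this story, so treat them as illustrative.

# Minimal sketch of the Kaplan et al. (2020) parameter-count scaling law:
#   L(N) ~= (N_c / N) ** alpha_N
# The constants are the paper's approximate fits (an assumption here;
# the article itself quotes no numbers).

ALPHA_N = 0.076   # fitted exponent for model size
N_C = 8.8e13      # fitted "critical" parameter count

def predicted_loss(n_params: float) -> float:
    """Predicted test loss (nats/token) for an n_params-parameter model,
    assuming data and compute are not the bottleneck."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x jump in parameters trims loss by a predictable factor -- the smooth
# curve Scott says can only be sampled every few years, once a new
# supercomputer is built and a model is trained on top of it.
for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")

Under a power law, loss keeps falling at a steady rate per order of magnitude of scale, which is roughly the "exponential" Scott describes and the sense in which he argues the field is "not at diminishing marginal returns on scale-up."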

Read more of this story at Slashdot.
https://slashdot.org/story/24/07/15/2032259/microsoft-cto-kevin-scott-thinks-llm-scaling-laws-will-h...
