Nvidia licenses Groq’s inferencing chip tech and hires its leaders
Tuesday, December 30, 2025, 16:24, by ComputerWorld
Nvidia has licensed intellectual property from inferencing chip designer Groq, and hired away some of its senior executives, but stopped short of an outright acquisition.
“We’ve taken a non-exclusive license to Groq’s IP and have hired engineering talent from Groq’s team to join us in our mission to provide world-leading accelerated computing technology,” an Nvidia spokesman said Tuesday via email. But, he said, “We haven’t acquired Groq.”

Groq designs and sells chips optimized for AI inferencing. These chips, which Groq calls language processing units (LPUs), are lower-powered, lower-priced devices than the GPUs Nvidia designs and sells, which these days are primarily used for training AI models. As the AI market matures and usage shifts from the creation of AI tools to their use, demand for devices optimized for inferencing is likely to grow. Groq also rents out its chips, operating an inferencing-as-a-service business called GroqCloud.

Groq itself announced the deal and the executive moves on Dec. 24, saying “it has entered into a non-exclusive licensing agreement with Nvidia for Groq’s inference technology” and that, as part of the agreement, “Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.” The deal could be worth as much as $20 billion, TechCrunch reported.

A way out of the memory squeeze?

There is tension throughout the supply chain for chips used in AI applications; on Nvidia’s last earnings call, its CFO reported that some of its chips are “sold out” or “fully utilized.” One contributing factor identified by analysts is a shortage of high-bandwidth memory. Finding ways to make their AI operations less dependent on scarce memory chips is becoming a key objective for AI vendors and enterprise buyers alike. A significant difference between Groq’s chip designs and Nvidia’s is the type of memory each uses.
Nvidia’s fastest chips are designed to work with high-bandwidth memory, the price of which, like that of other fast memory technologies, is soaring due to limited production capacity and rising demand from AI-related applications. Groq, meanwhile, integrates static RAM (SRAM) into its chip designs. It says SRAM is faster and less power-hungry than the dynamic RAM used by competing chip technologies; another advantage is that it is not (yet) as scarce as the high-bandwidth memory or DDR5 DRAM used elsewhere. Licensing Groq’s technology opens the way for Nvidia to diversify its memory sourcing.

Not an acquisition

By structuring its relationship with Groq as an IP licensing deal, and by hiring the engineers it is most interested in rather than buying their employer, Nvidia avoids taking on the GroqCloud service business just as it is reportedly stepping back from its own service business, DGX Cloud, and restructuring it as an internal engineering service. It could also escape much of the antitrust scrutiny that would have accompanied a full-on acquisition.

Nvidia did not respond to questions about the names and roles of the former Groq executives it has hired. However, Groq’s founder, Jonathan Ross, reports on his LinkedIn profile that he is now chief software architect at Nvidia, while the profile of Groq’s former president, Sunny Madra, says he is now Nvidia’s VP of hardware.

What’s left of Groq will be run by Simon Edwards, formerly CFO at sales automation software vendor Conga; he joined Groq as CFO just three months ago.

This article first appeared on Network World.
https://www.computerworld.com/article/4112137/nvidia-licenses-groqs-inferencing-chip-tech-and-hires-...








