Apple Teams Up With NVIDIA to Speed Up AI Language Models
Friday, December 20, 2024, 12:18, by MacRumors
Apple earlier this year published and open-sourced Recurrent Drafter (ReDrafter), an approach that combines beam search and dynamic tree attention to accelerate text generation. Beam search explores multiple candidate text sequences in parallel for better results, while dynamic tree attention organizes those candidates and removes redundant overlap among them to improve efficiency. Apple has now integrated the technique into NVIDIA's TensorRT-LLM framework, which optimizes LLM inference on NVIDIA GPUs, where it achieved 'state of the art performance,' according to Apple.

In testing with a production model containing tens of billions of parameters, the integration delivered a 2.7x increase in tokens generated per second. Apple says the improved performance not only reduces user-perceived latency but also lowers GPU usage and power consumption.

From Apple's Machine Learning Research blog: 'LLMs are increasingly being used to power production applications, and improving inference efficiency can both impact computational costs and reduce latency for users. With ReDrafter's novel approach to speculative decoding integrated into the NVIDIA TensorRT-LLM framework, developers can now benefit from faster token generation on NVIDIA GPUs for their production LLM applications.'

Developers interested in implementing ReDrafter can find detailed information on both Apple's website and NVIDIA's developer blog.

This article, "Apple Teams Up With NVIDIA to Speed Up AI Language Models," first appeared on MacRumors.com.
https://www.macrumors.com/2024/12/20/apple-nvidia-speed-up-ai-language-models/
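The technique described above is a form of speculative decoding: a small drafter cheaply proposes several tokens ahead, and the large target model verifies them, keeping the longest prefix it agrees with. The sketch below is a generic, simplified illustration of that draft-then-verify loop in Python. It is not Apple's ReDrafter (which drafts with a recurrent model plus beam search and dynamic tree attention) and not the TensorRT-LLM API; the draft_model and target_model functions are toy stand-ins invented for this example so it runs without dependencies.

# Generic draft-then-verify loop behind speculative decoding (simplified
# illustration only; NOT Apple's ReDrafter implementation or the TensorRT-LLM API).
# The "models" below are toy stand-ins so the example runs without dependencies.

import random

VOCAB = list(range(100))

def target_model(context):
    """Toy stand-in for the large target LLM: deterministically picks the next token."""
    return random.Random(sum(context) * 31 + len(context)).choice(VOCAB)

def draft_model(context, k):
    """Toy stand-in for the cheap drafter: guesses k tokens ahead,
    agreeing with the target roughly 70% of the time."""
    rng = random.Random(42 + len(context))
    out, ctx = [], list(context)
    for _ in range(k):
        guess = target_model(ctx) if rng.random() < 0.7 else rng.choice(VOCAB)
        out.append(guess)
        ctx.append(guess)
    return out

def speculative_decode(prompt, max_new_tokens=32, draft_len=4):
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. The drafter speculates several tokens ahead in one cheap pass.
        draft = draft_model(tokens, draft_len)
        # 2. The target model checks the draft. In production this verification is
        #    a single batched GPU pass over all drafted positions, which is where
        #    the tokens-per-second speedup comes from; here it is a simple loop.
        accepted = []
        for guess in draft:
            expected = target_model(tokens + accepted)
            if guess == expected:
                accepted.append(guess)      # draft token matches: accepted "for free"
            else:
                accepted.append(expected)   # first mismatch: keep the target's token, drop the rest
                break
        tokens.extend(accepted)
    return tokens[:len(prompt) + max_new_tokens]

print(speculative_decode([1, 2, 3]))

Because every verified draft token costs far less than a full sequential step of the large model, the more often the drafter agrees with the target, the more tokens are generated per expensive pass, which is the effect behind the reported 2.7x speedup.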