Honey, I shrunk the LLM! A beginner's guide to quantization – and testing it
Sunday, 14 July 2024, 13:32, by TheRegister
Just be careful not to shave off too many bits... These things are known to hallucinate as it is
Hands on If you hop on Hugging Face and start browsing through large language models, you'll quickly notice a trend: most have been trained at 16-bit floating-point or Brain-float precision. …
https://go.theregister.com/feed/www.theregister.com/2024/07/14/quantization_llm_feature/
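The teaser cuts off there, but the core idea behind the article's topic is simple: quantization maps a model's 16-bit (FP16/BF16) weights onto fewer bits, trading a little accuracy for a lot less memory. As a minimal illustrative sketch (not from the article, and assuming nothing beyond NumPy), here is naive symmetric "absmax" int8 quantization of a single weight tensor:

    import numpy as np

    # Hypothetical stand-in for one weight tensor of a model
    # trained at 16-bit precision (FP16 or BF16).
    weights = np.random.randn(4, 4).astype(np.float16)

    # Symmetric absmax quantization: pick a scale so the largest
    # magnitude maps to the int8 limit (127), then round.
    scale = float(np.abs(weights).max()) / 127.0
    q_weights = np.round(weights / scale).astype(np.int8)  # 8 bits per value

    # Dequantize to approximate the originals; the gap is the
    # quantization error the headline warns about.
    deq = q_weights.astype(np.float16) * scale
    print("max abs error:", np.abs(weights - deq).max())

Going from 16 bits to 8 halves the memory footprint; pushing further to 4 bits halves it again, but the rounding error grows, which is why the standfirst cautions against shaving off too many bits.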