How NVIDIA is Evolving from GPUs to xPUs 

Tuesday, April 13, 2021, 22:49, by eWeek
This week NVIDIA is holding its digital GPU Technology Conference (GTC). Historically, the event has focused on the developers, gamers, data scientists, auto manufacturers and other constituencies that leverage NVIDIA’s core product, the GPU. During the past few years, the company has made a number of moves, both through acquisitions and internal development, to expand its processor expertise into other types of silicon.
At GTC 2021, NVIDIA made a few key announcements in this area, covered below.
NVIDIA enters CPU market with Project Grace
Historically, NVIDIA has been all about GPUs but has never disputed the value of a CPU. It did point out, and rightfully so, that Moore’s Law was running out of steam and that CPU performance was plateauing. There have been some theories as to how to kickstart Moore’s Law, including moving away from silicon itself. At GTC, NVIDIA announced Project Grace, named after Grace Hopper, a pioneer of computer programming in the 1950s. Grace aims to remove many of the limitations of the traditional x86 architecture.
The benefit of x86 is the ability to offer varying configurations of CPU, memory, PCI Express and peripherals to serve the needs of applications. However, processing large amounts of data has been, and continues to be, a problem. This is particularly true for transformers and recommender systems, and it is a big reason why CPU leader Intel has never cracked the AI market.
Grace solves the CPU bottlenecks
During his keynote address on April 13, NVIDIA CEO Jensen Huang illustrated the bottleneck with a scenario in which a CPU with 3X the memory of a GPU ran 40X slower. Why is that? The answer lies in the architecture: the data being processed by the CPU has to cross the PCI Express bus, and that bus becomes the bottleneck. With GPUs, NVLink, an interconnect NVIDIA created, can move data across multiple GPUs at once.
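Some quick back-of-envelope math shows why the data path, not the compute, dominates. The sketch below is illustrative only: the dataset size and the bandwidth figures (roughly PCIe Gen4 x16 versus an aggregate NVLink-class link) are assumptions, not numbers from the keynote.

```python
# Illustrative bottleneck math: time to move a working set over PCIe vs. NVLink.
# All figures are assumptions for the example, not NVIDIA's published numbers.

DATASET_GB = 512        # hypothetical working set held in CPU memory
PCIE_GBPS = 32          # ~PCIe Gen4 x16 throughput in GB/s (assumed)
NVLINK_GBPS = 600       # ~aggregate NVLink-class throughput in GB/s (assumed)

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Seconds needed to move size_gb gigabytes over a link of bandwidth_gbps GB/s."""
    return size_gb / bandwidth_gbps

pcie_time = transfer_seconds(DATASET_GB, PCIE_GBPS)
nvlink_time = transfer_seconds(DATASET_GB, NVLINK_GBPS)

print(f"Over PCIe:   {pcie_time:.1f} s")
print(f"Over NVLink: {nvlink_time:.2f} s")
print(f"Slowdown when the bus is the limit: {pcie_time / nvlink_time:.0f}x")
```

The exact ratio depends on the link generations involved; the point is simply that when the data path runs through a comparatively narrow bus, the extra CPU memory cannot be fed to the accelerators fast enough.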
This is the concept behind Grace, an architecture purpose-built for accelerated computing and for processing large amounts of data, such as AI workloads. Grace pairs Arm cores (NVIDIA is in the process of acquiring Arm) with NVLink to deliver much faster CPU performance for these data-heavy workloads.
To be clear, Grace isn’t a replacement for the CPU in one’s PC. It’s a highly specialized processor targeting workloads such as NLP models with billions, or even trillions, of parameters. NVIDIA claims that when Grace is combined with its GPUs, the system will deliver 10x better performance than comparable x86 systems. Given NVIDIA’s expertise in building accelerated computing systems, this certainly seems reasonable.
NVIDIA announces BlueField-3 DPU
Last year, NVIDIA announced its data processing unit, also known as a DPU. For those not familiar with DPUs, a simple way to think about one is as a network interface card (NIC) that has been souped up to offload many tasks from the CPU. By offloading these functions, the server’s CPU is freed to handle more workloads. In traditional systems, the NIC does basic networking and leaves the CPU to handle functions such as security and software-defined storage.
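To make the offload idea concrete, here is a minimal sketch of the arithmetic; the 64-core server and the 30 percent infrastructure share are assumptions chosen for illustration, not figures from NVIDIA.

```python
# Minimal sketch: how offloading infrastructure tasks to a DPU returns CPU
# cores to application workloads. The inputs are illustrative assumptions.

CORES_PER_SERVER = 64       # assumed server size
INFRA_SHARE = 0.30          # assumed fraction of cores consumed by networking,
                            # security and software-defined storage

infra_cores = CORES_PER_SERVER * INFRA_SHARE
app_cores_without_dpu = CORES_PER_SERVER - infra_cores
app_cores_with_dpu = CORES_PER_SERVER   # infrastructure now runs on the DPU

print(f"Application cores without a DPU: {app_cores_without_dpu:.0f}")
print(f"Application cores with a DPU:    {app_cores_with_dpu}")
```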
Last October, NVIDIA announced its BlueField-2 and 2X DPUs. This week, the company unveiled the first DPU designed for AI and accelerated computing, the BlueField-3. The card is optimized for multi-tenant, cloud-native environments and offers software-defined, hardware-accelerated networking, storage, security and management capabilities at data center speeds. A single BlueField-3 has the processing capabilities of 300 CPU cores, freeing up precious CPU cycles on the servers.
The new DPU will have 16 Arm Cortex-A78 cores and provide 400 Gbit/sec of bandwidth. It will also include hardware accelerators for storage, networking, cybersecurity, streaming, line-rate encryption and precision timing for 5G telco environments. As a point of comparison, the BlueField-2 contains 8 Arm Cortex-A72 cores and offers 200 Gbit/sec.
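For a quick side-by-side view, the short sketch below tabulates only the figures quoted above; the Cortex naming follows Arm’s convention, and nothing else is added.

```python
# Generational comparison using only the figures cited in the announcement.

specs = {
    "BlueField-2": {"arm_cores": 8,  "core_type": "Cortex-A72", "gbit_per_sec": 200},
    "BlueField-3": {"arm_cores": 16, "core_type": "Cortex-A78", "gbit_per_sec": 400},
}

for name, s in specs.items():
    print(f"{name}: {s['arm_cores']} x {s['core_type']}, {s['gbit_per_sec']} Gbit/s")

uplift = specs["BlueField-3"]["gbit_per_sec"] / specs["BlueField-2"]["gbit_per_sec"]
print(f"Bandwidth uplift from BlueField-2 to BlueField-3: {uplift:.0f}x")
```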
NVIDIA success comes from hardware and software innovation
As is the case with most things NVIDIA, the hardware isn’t the whole story. NVIDIA’s DOCA software development kit lets developers create DPU-accelerated services using standardized APIs. It’s this combination of hardware and software that has enabled NVIDIA to roar past rival Intel and stretch its lead.
BlueField already has a large ecosystem that includes Dell Technologies, Inspur, Lenovo and Supermicro, all of which are integrating BlueField into their systems. Cloud providers Baidu, JD.com and UCloud also announced they would be using the DPUs. Security vendors Fortinet and Guardicore announced support as well, as did edge systems vendors Cloudflare, Juniper and F5.
For those wondering when we might see BlueField-4, Huang indicated it would be announced in the 2024 time frame.
NVIDIA grew up and came to prominence as a GPU provider, but that’s just a part of what the company does. Its silicon expertise has now been extended to DPUs and CPUs, and the combination of the three will drive innovation faster than ever before.
Huang summed up the possibilities when he ended his keynote with the following statement: “Twenty years ago, all of this was science fiction. Ten years ago, it was a dream. Today, we are living it.”
The post How NVIDIA is Evolving from GPUs to xPUs appeared first on eWEEK.
https://www.eweek.com/networking/how-nvidia-is-evolving-from-gpus-to-xpus/