
Google Used Reinforcement Learning To Design Next-Gen AI Accelerator Chips

Thursday, June 10, 2021, 02:45, by Slashdot/Apple
Chip floorplanning is the engineering task of designing the physical layout of a computer chip. In a paper published in the journal Nature, Google researchers applied a deep reinforcement learning approach to chip floorplanning, creating a new technique that 'automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.' VentureBeat reports: The Google team's solution is a reinforcement learning method capable of generalizing across chips, meaning that it can learn from experience to become both better and faster at placing new chips. Training AI-driven design systems that generalize across chips is challenging because it requires learning to optimize the placement of all possible chip netlists (graphs of circuit components such as memory blocks and standard cells, including logic gates) onto all possible canvases. The researchers' system aims to place a 'netlist' graph of logic gates, memory, and more onto a chip canvas, such that the design optimizes power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and evaluating the target metrics typically takes from hours to over a day.
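As a rough illustration of the objective just described (not Google's published formulation), the Python sketch below scores a placement by half-perimeter wirelength (HPWL), a standard proxy metric for wirelength, minus a congestion penalty; the congestion input and the 0.5 weight are illustrative assumptions.

    # Hypothetical placement cost: HPWL plus a weighted congestion term.
    # The RL agent maximizes reward, so cost terms enter negatively.

    def hpwl(coords: dict[int, tuple[float, float]],
             nets: list[list[int]]) -> float:
        """Half-perimeter wirelength: for each net, the half-perimeter
        of the bounding box enclosing its pins, summed over all nets."""
        total = 0.0
        for net in nets:
            xs = [coords[n][0] for n in net]
            ys = [coords[n][1] for n in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    def reward(coords, nets, congestion: float, w_cong: float = 0.5) -> float:
        # w_cong is an assumed weight, not the paper's tuned value.
        return -(hpwl(coords, nets) + w_cong * congestion)

    # Example: three placed cells, two nets connecting them.
    coords = {0: (0.0, 0.0), 1: (3.0, 1.0), 2: (1.0, 4.0)}
    nets = [[0, 1], [1, 2]]
    print(reward(coords, nets, congestion=2.0))  # -(4.0 + 5.0 + 1.0) = -10.0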

Starting with an empty chip, the Google team's system places components sequentially until it completes the netlist. To guide the system in selecting which components to place first, components are sorted by descending size; placing larger components first reduces the chance that no feasible placement remains for them later. Training the system required creating a dataset of 10,000 chip placements, where the input is the state associated with the given placement and the label is the reward for the placement (i.e., wirelength and congestion). The researchers built this dataset by picking five different chip netlists and applying an AI algorithm to create 2,000 diverse placements for each one. The system took 48 hours to 'pre-train' on an Nvidia Volta graphics card and 10 CPUs, each with 2GB of RAM. Fine-tuning initially took up to 6 hours, but in later benchmarks, applying the pre-trained system to a new netlist without fine-tuning generated a placement in less than a second on a single GPU. In one test, the Google researchers compared their system's recommendations with a manual baseline: the production design of a previous-generation TPU chip created by Google's TPU physical design team. Both the system and the human experts consistently generated viable placements that met timing and congestion requirements, but the AI system also outperformed or matched the manual placements in area, power, and wirelength while taking far less time to meet design criteria.
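To make the sequential, size-ordered placement loop concrete, here is a minimal self-contained Python sketch. The grid canvas, the component model, and the first-feasible-cell 'policy' stub are assumptions for demonstration; in Google's system, the cell choice comes from a trained RL policy network.

    # Illustrative sketch of sequential placement sorted by descending size.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Component:
        name: str
        w: int  # width in grid cells
        h: int  # height in grid cells

        @property
        def area(self) -> int:
            return self.w * self.h

    class Canvas:
        def __init__(self, rows: int, cols: int):
            self.rows, self.cols = rows, cols
            self.occupied = [[False] * cols for _ in range(rows)]
            self.positions: dict[str, tuple[int, int]] = {}

        def _fits(self, comp: Component, r: int, c: int) -> bool:
            # A cell is feasible if the block stays in bounds and
            # overlaps no previously placed block.
            if r + comp.h > self.rows or c + comp.w > self.cols:
                return False
            return not any(self.occupied[r + i][c + j]
                           for i in range(comp.h) for j in range(comp.w))

        def feasible_cells(self, comp: Component) -> list[tuple[int, int]]:
            return [(r, c) for r in range(self.rows)
                    for c in range(self.cols) if self._fits(comp, r, c)]

        def place(self, comp: Component, cell: tuple[int, int]) -> None:
            r, c = cell
            for i in range(comp.h):
                for j in range(comp.w):
                    self.occupied[r + i][c + j] = True
            self.positions[comp.name] = cell

    def place_netlist(components: list[Component], canvas: Canvas) -> Canvas:
        # Sort by descending area so large blocks go first, reducing the
        # chance that no feasible slot remains for them later on.
        for comp in sorted(components, key=lambda c: c.area, reverse=True):
            feasible = canvas.feasible_cells(comp)
            if not feasible:
                raise RuntimeError(f"no feasible placement for {comp.name}")
            # Stand-in for the learned policy: take the first feasible cell.
            canvas.place(comp, feasible[0])
        return canvas

    if __name__ == "__main__":
        parts = [Component("sram", 4, 4), Component("alu", 2, 3),
                 Component("reg", 1, 1)]
        print(place_netlist(parts, Canvas(8, 8)).positions)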

Read more of this story at Slashdot.
rss.slashdot.org/~r/Slashdot/slashdot/~3/TYwEg0Xjd3c/google-used-reinforcement-learning-to-design-ne...