‘Robot, Build It’: MIT’s New AI Lets You Build Real Objects Just by Describing Them
Wednesday, December 17, 2025, 16:50, by eWeek
Say it, picture it, and watch it take shape. That’s the idea behind a new MIT system that turns simple words into real, usable objects.
Researchers at MIT have developed an AI-powered robotic assembly system that lets people design and build physical objects simply by describing them in plain language. Instead of learning complex computer-aided design (CAD) tools, users can describe what they want, such as a lamp or a shelf, and the system handles the rest. The project combines generative AI with robotic assembly to accelerate design and make it more accessible, especially for people without technical training.

How the system works

The process begins with a text or spoken prompt, such as "make me a chair." From there, a generative AI model creates a 3D digital version of the object. A second AI model then reasons about the object's shape and purpose, deciding where each part should go based on how the object is meant to be used.

The system relies on two types of prefabricated parts: structural pieces and panels. A vision-language model (VLM) analyzes the geometry of the object and determines where panels are needed, such as on a chair's seat or backrest. Once the design is finalized, a robotic system automatically assembles the object (see the illustrative sketch at the end of this article).

"Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future," said Alex Kyaw, lead author of the research.

Humans stay in control

Unlike fully automated systems, users remain involved throughout the design process. They can give feedback and refine the object as it takes shape, guiding the AI toward their personal preferences.

"The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible," Kyaw said.

Richa Gupta, a co-author of the paper, added that "the human-in-the-loop process allows the users to steer the AI-generated designs and have a sense of ownership in the final result."

While the current system focuses on simple, multi-part objects, the researchers say the framework could eventually be used for more complex designs, including architectural elements or aerospace components. In the long term, the technology could let people build furniture and other items locally, rather than shipping bulky products from factories.

"Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner," said Randall Davis, senior author of the study.

Also read: MIT built a virtual playground where robots learn to think.
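The article describes the pipeline only at a high level: a prompt goes to a generative 3D model, a vision-language model decides where panels belong, the user refines the result, and a robot assembles it. The Python sketch below is a purely hypothetical illustration of that flow; every class, function, and interface here is an assumption made for readability, not MIT's published code.

```python
# Hypothetical sketch of the text-to-assembly pipeline described above.
# All names and interfaces are illustrative; each AI stage is stubbed
# with placeholder logic standing in for the real models and robot.

from dataclasses import dataclass, field

@dataclass
class Part:
    kind: str              # "structural" or "panel", the two prefabricated part types
    position: tuple        # (x, y, z) placement decided by the reasoning model

@dataclass
class Design:
    name: str
    parts: list = field(default_factory=list)

def generate_3d_model(prompt: str) -> Design:
    """Stage 1: a generative model turns the text prompt into a 3D design."""
    return Design(name=prompt)  # placeholder for the generative AI step

def place_panels(design: Design) -> Design:
    """Stage 2: a vision-language model reasons about geometry and intended
    use, deciding where panels belong (e.g., a chair's seat or backrest)."""
    design.parts.append(Part(kind="structural", position=(0.0, 0.0, 0.0)))
    design.parts.append(Part(kind="panel", position=(0.0, 0.0, 0.5)))
    return design

def refine(design: Design, feedback: str) -> Design:
    """Human-in-the-loop step: user feedback steers the generated design."""
    print(f"Refining '{design.name}' with feedback: {feedback}")
    return design

def assemble(design: Design) -> None:
    """Stage 3: the robotic system assembles the finalized design."""
    for part in design.parts:
        print(f"Placing {part.kind} part at {part.position}")

if __name__ == "__main__":
    design = generate_3d_model("make me a chair")
    design = place_panels(design)
    design = refine(design, "make the backrest taller")
    assemble(design)
```

Running the sketch just prints placeholder assembly steps for a chair; in the real system each stub would call the corresponding generative model, VLM, or robot controller.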
https://www.eweek.com/news/mit-ai-powered-robotic-assembly/