Waymo Explores Using Google's Gemini To Train Its Robotaxis

Saturday, November 2, 2024, 01:10, by Slashdot
Waymo is advancing autonomous driving with a new training model for its robotaxis built on Google's multimodal large language model (MLLM) Gemini. The Verge reports: Waymo released a new research paper today that introduces an 'End-to-End Multimodal Model for Autonomous Driving,' also known as EMMA. This new end-to-end training model processes sensor data to generate 'future trajectories for autonomous vehicles,' helping Waymo's driverless vehicles make decisions about where to go and how to avoid obstacles. But more importantly, this is one of the first indications that the leader in autonomous driving has designs to use MLLMs in its operations. And it's a sign that these LLMs could break free of their current use as chatbots, email organizers, and image generators and find application in an entirely new environment on the road. In its research paper, Waymo is proposing 'to develop an autonomous driving system in which the MLLM is a first class citizen.'
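To picture what an end-to-end, MLLM-centric interface could look like in practice, here is a minimal Python sketch under stated assumptions: camera frames and recent ego states are serialized into a text prompt, and the model's free-text reply is parsed back into trajectory waypoints. The function names (mllm_generate, build_prompt, plan_step), the prompt wording, and the waypoint format are illustrative assumptions, not Waymo's or Gemini's actual API.

    # Hypothetical sketch of an end-to-end multimodal-model driving interface in the
    # spirit described above: camera frames plus recent ego states go in as a prompt,
    # and the model answers with future waypoints as plain text. `mllm_generate` is a
    # placeholder for whatever multimodal model call is available; it is not Waymo's
    # or Gemini's actual API.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class EgoState:
        x: float        # metres, ego frame
        y: float
        heading: float  # radians
        speed: float    # m/s

    def build_prompt(ego_history: List[EgoState], command: str) -> str:
        """Serialise the ego history and a high-level routing command into text."""
        history_txt = "; ".join(
            f"t-{len(ego_history) - i}: ({s.x:.1f}, {s.y:.1f}, v={s.speed:.1f} m/s)"
            for i, s in enumerate(ego_history)
        )
        return (
            "You are a driving planner. Given the attached camera images, the past "
            f"ego states [{history_txt}] and the command '{command}', briefly reason "
            "about obstacles, then reply with the next 8 waypoints as 'x,y' pairs in "
            "metres, one per line."
        )

    def parse_waypoints(reply: str) -> List[Tuple[float, float]]:
        """Pull 'x,y' waypoint lines out of the model's free-text reply."""
        waypoints = []
        for line in reply.splitlines():
            parts = line.strip().split(",")
            if len(parts) == 2:
                try:
                    waypoints.append((float(parts[0]), float(parts[1])))
                except ValueError:
                    continue  # a reasoning line, not a waypoint
        return waypoints

    def plan_step(camera_frames, ego_history, command, mllm_generate: Callable):
        """One planning tick: prompt the multimodal model, parse its trajectory."""
        prompt = build_prompt(ego_history, command)
        reply = mllm_generate(prompt=prompt, images=camera_frames)  # placeholder call
        return parse_waypoints(reply)

    def demo_reply(prompt: str, images) -> str:
        # Stub standing in for a real model, so the sketch can be dry-run end to end.
        return "Clear road ahead.\n" + "\n".join(f"{2.0 * t},0.0" for t in range(1, 9))

    print(plan_step([], [EgoState(0.0, 0.0, 0.0, 5.0)], "continue straight", demo_reply))
    # -> [(2.0, 0.0), (4.0, 0.0), ..., (16.0, 0.0)]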

The paper outlines how, historically, autonomous driving systems have developed specific 'modules' for the various functions, including perception, mapping, prediction, and planning. This approach has proven useful for many years but has problems scaling 'due to the accumulated errors among modules and limited inter-module communication.' Moreover, these modules could struggle to respond to 'novel environments' because, by nature, they are 'pre-defined,' which can make it hard to adapt. Waymo says that MLLMs like Gemini present an interesting solution to some of these challenges for two reasons: they are 'generalists' trained on vast sets of data scraped from the internet 'that provide rich 'world knowledge' beyond what is contained in common driving logs'; and they demonstrate 'superior' reasoning capabilities through techniques like 'chain-of-thought reasoning,' which mimics human reasoning by breaking down complex tasks into a series of logical steps.
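To make the scaling critique concrete, here is a toy, self-contained Python illustration (my own, not Waymo's code) of such a modular layout: perception, prediction, and planning are separate stages joined by narrow, pre-defined interfaces, so whatever an early stage gets wrong or omits is invisible to everything downstream. All function names and numbers are invented for illustration.

    # Toy illustration (my own, not Waymo's code) of the modular stack described
    # above: perception, prediction, and planning are separate stages joined by
    # narrow, pre-defined interfaces. Anything an early stage misses simply does
    # not exist for the later ones, which is how errors accumulate.
    from typing import List, Tuple

    Detection = Tuple[float, float, str]    # (x, y, class label) in the ego frame
    Trajectory = List[Tuple[float, float]]  # sequence of (x, y) waypoints

    def perceive(camera_frame: bytes) -> List[Detection]:
        # Dummy perception: pretend we detected one pedestrian 12 m ahead, 1 m left.
        return [(12.0, -1.0, "pedestrian")]

    def predict(detections: List[Detection]) -> List[Trajectory]:
        # Dummy prediction: assume each agent drifts 0.5 m toward our lane per step.
        # It can only reason about what perception handed over.
        return [[(x, y + 0.5 * t) for t in range(1, 4)] for x, y, _ in detections]

    def plan(agent_trajectories: List[Trajectory]) -> Trajectory:
        # Dummy planning: nudge the ego path sideways if any forecast enters our lane.
        # By this point the raw sensor context is long gone.
        evade = any(abs(y) < 0.5 for traj in agent_trajectories for _, y in traj)
        return [(2.0 * t, 1.0 if evade else 0.0) for t in range(1, 5)]

    def drive_step(camera_frame: bytes) -> Trajectory:
        # Each hand-specified interface limits how much context flows end to end,
        # which is the scaling problem the paper describes.
        return plan(predict(perceive(camera_frame)))

    print(drive_step(b"fake-frame"))  # [(2.0, 1.0), (4.0, 1.0), (6.0, 1.0), (8.0, 1.0)]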

Waymo developed EMMA as a tool to help its robotaxis navigate complex environments. The company identified several situations in which the model helped its driverless cars find the right route, including encountering various animals or construction in the road. But EMMA also has its limitations, and Waymo acknowledges that further research will be needed before the model is put into practice. For example, EMMA couldn't incorporate 3D sensor inputs from lidar or radar, which Waymo said was 'computationally expensive.' And it could only process a small number of image frames at a time. There are also risks to using MLLMs to train robotaxis that go unmentioned in the research paper. Chatbots like Gemini often hallucinate or fail at simple tasks like reading clocks or counting objects.

Read more of this story at Slashdot.
https://tech.slashdot.org/story/24/11/01/2150228/waymo-explores-using-googles-gemini-to-train-its-ro...
