
For the first time ever, I wish Google would act more like Amazon

Wednesday, 22 January 2025, 11:45, by ComputerWorld
Fair warning: This isn’t your average article about what’s happening with all the newfangled AI hullabaloo in this weird and wild world of ours.

Nope — there’ll be no “oohing” and “ahhing” or talk about how systems like Gemini and ChatGPT and their brethren are, like, totally gonna revolutionize the world and change life as we know it.

Instead, I want to look at the state of these generative AI systems through as practical and realistic a lens as possible — focusing purely on how they work right now and what they’re able to accomplish.

And with that in mind, my friend, there’s no way around it: These things seriously suck.

Sorry for the bluntness, but for Goog’s sake, someone’s gotta say it. For all their genuinely impressive technological feats and all the interesting ways they’re able to help with mundane work tasks, Google’s Gemini and other such generative AI systems are doing us all a major disservice in one key area — and everyone seems content to look the other way and pretend it isn’t a problem.

That’s why I was so pleasantly surprised to see that one tech giant seemingly isn’t taking the bait. It may be lagging behind, but it’s taking its time to get this right rather than rushing something out half-baked like everyone else.

It’s the antithesis to the strategy we’re seeing play out from Google and virtually every other tech player right now. And my goodness, is it ever a refreshing contrast.

[Get level-headed insight in your inbox with my Android Intelligence newsletter. Three new things to know and try each Friday!]

The Google Gemini Bizarro World

I won’t keep you waiting: The company that’s getting it right, at least in terms of its process and philosophy, is none other than Amazon.

I’ll be the first to admit: I’m typically not a huge fan of Amazon or its approach. But within this specific area, it really is creating a model for how tech companies should be thinking about these generative AI systems.

My revelation comes via a locked-down article that went mostly unnoticed at The Financial Times last week. The report’s all about how Amazon is scrambling to upgrade its Alexa virtual assistant with generative AI and relaunch it as a powerful “agent” for offering up complex answers and completing all kinds of online tasks.

More of the same, right? Sure sounds that way — but hang on: There’s a twist.

Allow me to quote a pertinent passage from behind the paywall for ya:

Rohit Prasad, who leads the artificial general intelligence (AGI) team at Amazon, told the Financial Times the voice assistant still needed to surmount several technical hurdles before the rollout.

This includes solving the problem of “hallucinations” or fabricated answers, its response speed or “latency,” and reliability. 

“Hallucinations have to be close to zero,” said Prasad. “It’s still an open problem in the industry, but we are working extremely hard on it.” 

(Insert exaggerated record-scratch sound effect here.)

Wait — what? Did we read that right?!

Let’s look to another passage to confirm:

One former senior member of the Alexa team said while LLMs were very sophisticated, they came with risks, such as producing answers that were “completely invented some of the time.”

“At the scale that Amazon operates, that could happen large numbers of times per day,” they said, damaging its brand and reputation.

Well, tickle me tootsies and call me Tito. Someone actually gives a damn.

If the contrast here still isn’t apparent, let me spell it out: These large-language-model systems — the type of technology under the hood of Gemini, ChatGPT, and pretty much every other generative AI service we’ve seen show up over the past year or two — they don’t really know anything, in any human-like sense. They work purely by analyzing massive amounts of data, observing patterns within that data, and then using sophisticated statistics to predict what word is likely to come next in any scenario — relying on all the info they’ve ingested as a guide.

Or, put into layman’s terms: They have no idea what they’re saying or if it’s right. They’re just coughing up characters based on patterns and probability.
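
To make that concrete, here’s a deliberately tiny sketch of the idea in Python (nothing like Gemini’s actual architecture, mind you; just a hypothetical toy “bigram” model over a made-up corpus), showing that the “prediction” really is just counting patterns and picking the statistically likely next word:

    from collections import Counter, defaultdict

    # Toy stand-in for the web-scale text a real LLM ingests during training.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # "Training": tally which word tends to follow which. Real models learn
    # billions of parameters instead of a lookup table, but the objective
    # is the same: predict the next token.
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        """Return the statistically likeliest next word. Note what's absent:
        no understanding, no notion of truth, just pattern frequency."""
        counts = follows[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat', since 'cat' followed 'the' most often

Scale that basic idea up by a few hundred billion parameters and a few trillion words of training data, and you’ve got the gist of what’s happening under the hood.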

And that gets us to the core problem with these systems and why, as I put it so elegantly a moment ago, they suck.

As I mused recently whilst explaining why Gemini is, in many ways, the new Google+:

The reality … is that large-language models like Gemini and ChatGPT are wildly impressive at a very small set of specific, limited tasks. They work wonders when it comes to unambiguous data processing, text summarizing, and other low-level, closely defined and clearly objective chores. That’s great! They’re an incredible new asset for those sorts of purposes.

But everyone in the tech industry seems to be clamoring to brush aside an extremely real asterisk to that — and that’s the fact that Gemini, ChatGPT, and other such systems simply don’t belong everywhere. They aren’t at all reliable as “creative” tools or tools intended to parse information and provide specific, factual answers. And we, as actual human users of the services associated with this stuff, don’t need this type of technology everywhere — and might even be actively harmed by having it forced into so many places where it doesn’t genuinely belong.

That, m’dear, is a pretty pressing problem.

Allow me to borrow a quote collected by my Computerworld colleague Lucas Mearian in a thoroughly reported analysis of how, exactly, these large-language models work:

“Hallucinations happen because LLMs, in their most vanilla form, don’t have an internal state representation of the world,” said Jonathan Siddharth, CEO of Turing, a Palo Alto, California, company that uses AI to find, hire, and onboard software engineers remotely. “There’s no concept of fact. They’re predicting the next word based on what they’ve seen so far — it’s a statistical estimate.”

And there we have it.
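
You can even watch that “no concept of fact” problem play out in miniature. Below is a hypothetical, hand-built probability table standing in for what such a toy model might pick up from text like “the capital of france is paris”; ask it about something its data never covered, and it answers confidently anyway, because nothing in the loop ever consults reality:

    import random

    # Hypothetical probability table a toy model might learn from text like
    # "the capital of france is paris" and "the capital of italy is rome".
    next_word_probs = {"is": {"paris": 0.5, "rome": 0.5}}

    def answer(prompt):
        """Complete the prompt by sampling a statistically plausible next
        word. There is no fact lookup and no 'I don't know' branch here."""
        last_word = prompt.split()[-1]
        words, weights = zip(*next_word_probs[last_word].items())
        return random.choices(words, weights=weights)[0]

    # The "model" has never seen a word about Spain, but it answers anyway,
    # with exactly the same confidence it would have about France.
    print(answer("the capital of spain is"))  # 'paris' or 'rome', stated flatly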

That’s why Gemini, ChatGPT, and other such systems so frequently serve up inaccurate info and present it as fact — something that’s endlessly amusing to see examples of, sure, but that’s also an extremely serious issue. What’s more, it’s only growing more and more prominent as these systems show up everywhere and increasingly overshadow traditional search methods within Google and beyond.

And that brings us back to Amazon’s seemingly accidental accomplishment.

Amazon and Google: A tale of two AI journeys

What’s especially interesting about the slow-moving state of Amazon’s Alexa AI rollout is how it’s being presented as a negative by most market-watchers.

Going back to that same Financial Times article I quoted a moment ago, the conclusion is unambiguous:

In June, Mihail Eric, a former machine learning scientist at Alexa and founding member of its “conversational modelling team,” said publicly that Amazon had “dropped the ball” on becoming “the unequivocal market leader in conversational AI” with Alexa.

But, ironically, that’s exactly where I see Amazon doing something admirable and creating that striking contrast between its efforts and those of Google and others in the industry.

The reality is that all these systems share those same foundational flaws. Remember: By the very nature of the technology, generative-AI-provided answers are woefully inconsistent and unreliable.

And yet, Google’s been going into overdrive to get Gemini into every possible place and get us all in the habit of relying on it for almost every imaginable purpose — including those where it simply isn’t reliable. (Remember my analogy from a minute ago? Yuuuuuup.)

In doing so, it’s chasing short-term market gains at the cost of long-term trust. All other variables aside, being wrong or misleading with basic information 20% of the time — or, heck, even just 10% of the time — is a pretty substantial problem. I’ve said it before, and I’ll say it again: If something is inaccurate or unreliable 10% of the time, it’s useful precisely 0% of the time.
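
For a sense of what that means in practice (and what that former Alexa team member meant by “large numbers of times per day”), run the back-of-envelope math. The query volume below is a purely hypothetical figure for illustration, not a published stat:

    # Back-of-envelope: a small error rate times a huge volume is a big number.
    queries_per_day = 1_000_000_000  # hypothetical assistant-scale volume
    error_rate = 0.10                # "even just 10% of the time"

    wrong_per_day = int(queries_per_day * error_rate)
    print(f"{wrong_per_day:,} confidently wrong answers per day")  # 100,000,000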

And to be clear, the stakes here couldn’t be higher. In terms of their answer-offering and info-providing capabilities, Gemini and other such systems are being framed and certainly perceived as magical answer machines. Most people aren’t treating ’em with a hefty degree of skepticism and taking the time to ask all the right questions, verify answers, and so on. They’re asking questions, seeing or hearing answers, and then assuming they’re right.

And by golly, are they getting an awful lot of confidently stated inaccuracies as a result — something that, as we established a moment ago, is likely inevitable with this type of technology in its current state.

On some level, Google is clearly aware of this. The company had been developing the technology behind Gemini for years before rushing it out into the world following the success and attention around ChatGPT’s initial rollout — but, as has been said in numerous venues over time, it hadn’t felt the technology was mature enough for public use.

So what changed? Not the nature of the technology — nope; by all accounts, it was just the competitive pressure that forced Google to say “screw it, it’s good enough” and go all-in with systems that weren’t, and still aren’t, ready for primetime, at least for all of their promoted purposes.

And that, my fellow accuracy-obsessed armadillo, is where Amazon is getting it right. Rather than rushing to swap Alexa out for some new half-baked replacement, the company is actually waiting until it feels like it’s got the new system ready — with reliability, yes, but also with branding and a consistent-seeming user experience. (Anyone who’s been trying to navigate the comically complex web of Gemini and Assistant on Android — and beyond — can surely relate!)

Whether Amazon will keep up this pattern or eventually relent and go the “good enough” route remains to be seen. Sooner or later, investor pressure may force it to follow Google’s path and put its next-gen answer agent out there, even if it in all likelihood still isn’t ready by any reasonable standard.

For now, though, man: I can’t help but applaud the fact that the company’s taking its time instead of prematurely fumbling to the finish line like everyone else. And I can’t help but wish Google had taken that same path, too, rather than doing its usual Google Thang™ and forcing some undercooked new concept into every last nook and cranny — no matter the consequences.

Maybe, hopefully, this’ll all settle out in some sensible way and turn into a positive in the future. For the moment, though, Google’s strategy sure seems like more of a minus than a plus for us as users of its most important products — and especially in this arena, it sure seems like getting it right should mean more than getting it out into the world quickly, flaws and all, whatever the cost.

Get plain-English perspective on the news that matters with my free Android Intelligence newsletter — three things to know and try in your inbox each Friday.
https://www.computerworld.com/article/3806751/google-amazon-ai.html
