AI’s moment of disillusionment

Monday, July 8, 2024, 11:00, by InfoWorld
Well, that didn’t take long. After all the “this time it’s different” comments about artificial intelligence (We see you, John Chambers!), enterprises are coming to grips with reality. AI isn’t going to take your job. It’s not going to write your code. It’s not going to write all your marketing copy (not unless you’re prepared to hire back the humans to fix it). And, no, it’s nowhere near artificial general intelligence (AGI) and won’t be anytime soon. Possibly never.

That’s right: We’ve entered AI’s trough of disillusionment, when we collectively stop believing the singularity is just around the corner and start finding ways AI augments, not replaces, humans. For those new to the industry, and hence new to our collective tendency to overhype pretty much everything—blockchain, web3 (remember that?), serverless—this isn’t cause for alarm. AI will have its place; it simply won’t be every place.

So many foolish hopes

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.”
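For anyone who hasn't poked at one of these black boxes directly, the entire interface is a prompt in and text out. Here is a minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an API key in the environment; the model name is illustrative:

```python
# A minimal look at the "black box" in practice, assuming the OpenAI
# Python SDK (openai >= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text goes in, plausible text comes out. The caller sees no reasoning
# and no sources, only the completion.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Explain why the sky is blue."}],
)
print(response.choices[0].message.content)
```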

The problem, however, is that LLMs don't actually learn anything. They can't reason. They're great at pattern matching but not at extrapolating from past training data to future problems, as a recent IEEE study found. Software development has been one of the brightest spots for genAI tools, but perhaps not quite to the extent we've hoped. For example, GPT-3.5's training data ends in 2021, so it struggled with LeetCode problems released after that cutoff, even easy ones. The study found that its success rate on easy problems plummeted from 89% to 52%, and its ability to produce working code for hard problems tanked from 40% to 0.66%.
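To make that comparison concrete, here is a toy harness in the spirit of the study's methodology (the study's actual harness and problems aren't reproduced here; the problems, the fake_model_solution stub, and the cutoff handling are all illustrative): bucket problems by publication date relative to the model's training cutoff and compare pass rates.

```python
# Toy evaluation harness: compare pass rates on problems published
# before vs. after a model's training cutoff. All names here are
# illustrative, not taken from the IEEE study.
from dataclasses import dataclass
from typing import Callable

CUTOFF_YEAR = 2021  # GPT-3.5's approximate training-data cutoff

@dataclass
class Problem:
    name: str
    year: int                          # year the problem was published
    prompt: str
    test: Callable[[Callable], bool]   # True if the solution passes

def fake_model_solution(prompt: str) -> Callable:
    # Placeholder for a real LLM call. It returns a correct answer for
    # the "seen" problem and a wrong one for the "unseen" problem,
    # mimicking the pattern the study observed.
    if "two numbers" in prompt:
        return lambda a, b: a + b
    return lambda s: s  # wrong: fails the reversal test below

problems = [
    Problem("add", 2019, "Return the sum of two numbers.",
            lambda f: f(2, 3) == 5),
    Problem("reverse", 2022, "Return the reversed string.",
            lambda f: f("abc") == "cba"),
]

def pass_rate(bucket: list[Problem]) -> float:
    results = [p.test(fake_model_solution(p.prompt)) for p in bucket]
    return sum(results) / len(results) if results else 0.0

pre = [p for p in problems if p.year <= CUTOFF_YEAR]
post = [p for p in problems if p.year > CUTOFF_YEAR]
print(f"pass rate pre-cutoff:  {pass_rate(pre):.0%}")   # 100%
print(f"pass rate post-cutoff: {pass_rate(post):.0%}")  # 0%
```

Swap the stub for a real model call and real unit tests, and the same bucketing reproduces the before/after comparison the study describes.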

According to Michelle Hampson, the finding shows that ChatGPT “lacks the critical thinking skills of a human and can only address problems it has previously encountered.” Tim Klapdor less graciously states, “ChatGPT didn’t learn the topic, it did no research, it did no validation, and it contributed no novel thoughts, ideas, or concepts. ChatGPT just colonized all of that data … and now it can copy/paste that information to you in a timely manner because it’s spending $US700K a day on compute.” Ouch.

This doesn’t mean genAI is useless for software development or other areas, but it does mean we need to reset our expectations and approach.

We still haven’t learned

This letdown isn’t just an AI thing. We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.

Linthicum has also taken serverless to task. “Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds,” he says. Why? Because these “introduced more nuanced solutions to the market with tailored approaches that cater to specific business needs rather than the one-size-fits-all of serverless computing.” I once suggested that serverless might displace Kubernetes and containers. I was wrong. Linthicum’s more measured approach feels correct because it follows what always seems to happen with big new trends: They don’t completely crater, they just stop pretending to solve all of our problems and instead get embraced for modest but still important applications.

This is where we’re heading with AI. I’m already seeing companies fail when they treat genAI as the answer to everything, and succeed when they use it as a complement to some things. It’s not time to dump AI. Far from it. Rather, it’s time to become thoughtful about how and where to use it. Then, like so many trends before it (open source, cloud, mobile, etc.), it will become a critical complement to how we work, rather than the only way we work.
https://www.infoworld.com/article/2514409/ais-moment-of-disillusionment.html
