Apple Intelligence’s biggest problem isn’t the Intelligence–it’s Apple

Wednesday, February 12, 2025, 11:30, by Macworld Reviews

Everyone knows that Apple is playing catch-up when it comes to Apple Intelligence. The company’s shipping AI models seem to be way behind the cutting edge, as OpenAI grows, Google pushes forward, and newcomers hit the scene.

I’m sure Apple is pouring everything it can into building better, more modern models, and we’ll hear about that effort in detail in June at WWDC. But what troubles me most about the Apple Intelligence rollout isn’t that Apple was caught flatfooted by the AI hype train and is struggling to catch up–it’s that Apple’s implementation of AI features also feels slapdash and rushed.

Apple doesn’t have to end up with the best large language model around to win the AI wars. It can be in the ballpark of the best or partner with the leaders to get what it needs. But it can’t fail at the part that is uniquely Apple: Making those features a pleasure to use, in the way we all expect from Apple. Right now, that’s where Apple is failing.

Apple’s best shot at AI’s worst

The worst thing about AI is that since much of it springs from the concept of a text-based language model, AI interfaces tend to be empty text boxes that you have to type something into. I can’t believe we’re back here. This is serious pre-1984 thinking, 40 years after Apple put a stake in the heart of the command-line interface.

Giving users an empty text box and expecting them to know what to say to get the result they want is a colossal user-interface failure. An empty text box is cruel. (And no, having to carefully issue abstract commands via voice is not a good alternative, nor is forcing users to laboriously correct mistaken output with additional text entry.)

The future of AI functionality needs to be built on a good user interface design that offers simple visual tools to step users through the process. This is where Apple can really make its mark, and I’m happy to report that in one area, it has really done it: image generation.

Image Playground may make some questionable images, but Apple is on the right track with the app’s UI. (Foundry)

I’m not a fan of the images Image Playground generates, but I have to give Apple credit for the interface it’s placed on top of its image-generation model. When you use Image Playground or create a Genmoji, Apple offers a proper interface that–while including a text box for suggestions–also offers a bunch of options you can scroll through and tap to add different suggestions and styles to the party. The stuff you enter in the text box is tokenized into floating elements. It’s an actual interface, and it works pretty well. Users don’t need to know about how the image-generation model is being run beneath the surface. Just let us make pictures.
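
To make that pattern concrete, here’s a rough SwiftUI sketch of the idea. The view, its state, and the sample suggestions are all invented for illustration, not Apple’s actual Image Playground code: free text becomes a removable token, and preset suggestions can be added with a tap, so the empty text box is never the whole interface.

```swift
import SwiftUI

// A rough sketch of the pattern the article praises. Everything here
// (the view, its state, the sample suggestions) is illustrative and
// invented for this example; it is not Apple's Image Playground code.
struct ConceptPickerView: View {
    @State private var draft = ""
    @State private var concepts: [String] = []
    private let suggestions = ["Watercolor", "Sunset", "Cat", "Retro"]

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            // Chosen concepts rendered as floating, removable tokens.
            ScrollView(.horizontal, showsIndicators: false) {
                HStack {
                    ForEach(concepts, id: \.self) { concept in
                        Button(concept) {
                            concepts.removeAll { $0 == concept }
                        }
                        .buttonStyle(.bordered)
                    }
                }
            }
            // Tappable presets: no typing required to get started.
            ScrollView(.horizontal, showsIndicators: false) {
                HStack {
                    ForEach(suggestions, id: \.self) { suggestion in
                        Button(suggestion) {
                            if !concepts.contains(suggestion) {
                                concepts.append(suggestion)
                            }
                        }
                        .buttonStyle(.borderedProminent)
                    }
                }
            }
            // Free text is still available, but it becomes a token too.
            TextField("Describe a concept", text: $draft)
                .onSubmit {
                    guard !draft.isEmpty else { return }
                    if !concepts.contains(draft) { concepts.append(draft) }
                    draft = ""
                }
        }
        .padding()
    }
}
```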

And then there’s the rest

The image-generation interface really is Apple’s best take on AI design. Unfortunately, other Apple Intelligence interface elements don’t fare so well. The truth is, I don’t think macOS 15 and iOS 18 have exposed how far behind Apple is in AI so much as they’ve exposed how little time Apple’s designers had to create proper interfaces for all of that AI.

Let’s take Writing Tools, which can proofread, rewrite, and modify text. On the Mac, Apple’s APIs and apps already have a system for spelling and grammar checking that offers a floating palette for navigating through all the errors. On all its platforms, misspellings and grammar issues can be underlined and then tapped for corrections.
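
For context, the checking system Writing Tools sits beside is a public one. Here’s a minimal sketch of it using AppKit’s long-standing NSSpellChecker; the sample string and the loop are mine, but the API calls are the real ones that feed the familiar underline-and-tap flow.

```swift
import AppKit

let checker = NSSpellChecker.shared
let text = "Ths sentence has a misspeling."

var searchStart = 0
while searchStart < text.utf16.count {
    // Returns the range of the next misspelled word,
    // or location == NSNotFound when none remain.
    let range = checker.checkSpelling(of: text, startingAt: searchStart)
    guard range.location != NSNotFound else { break }

    let word = (text as NSString).substring(with: range)
    // The same correction candidates the system palette surfaces.
    let guesses = checker.guesses(forWordRange: range,
                                  in: text,
                                  language: "en",
                                  inSpellDocumentWithTag: 0) ?? []
    print("\(word) -> \(guesses)")
    searchStart = range.location + range.length
}
```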

Writing Tools seems to have been grafted on in parallel with this system. As Pixel Envy’s Nick Heer points out, it “manifests as a popover, [which] works a little bit like a contextual menu and a little like a panel while doing the job of neither very successfully.”

Not only is the Writing Tools interface brittle and messy, but it’s not integrated into any other text tools that Apple has built into its operating systems over the years! This is where we can really see how Apple’s engineers and designers had to rush to implement as many Apple Intelligence features as possible for year one.

AI-based writing tools should have been integrated into Apple’s overall approach to spelling and grammar, but instead they’ve been shoved into their own silo. As a result, they lack a lot of the niceties one might expect–for example, when you ask Writing Tools to proofread or rewrite something, it just changes your text and then lets you toggle between the edited and unedited text.

Contrast that with an existing, AI-powered proofing app, Grammarly, which (even in its very limited Grammarly Desktop version on Mac) underlines errors in your text editor of choice, shows suggested changes when you click or tap, and displays paragraph-long edits with strikethrough and color highlighting to indicate changes.
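
To illustrate what that buys the user, here’s a hedged sketch of word-level change markup in Swift, built on the standard library’s CollectionDifference. The bracket markers stand in for the strikethrough and color highlighting a real text view would draw; none of this is Grammarly’s or Apple’s code.

```swift
import Foundation

// Produce a merged view of an edit: removed words marked [-like this-],
// inserted words marked {+like this+}, unchanged words left alone.
func markUpEdits(original: String, revised: String) -> String {
    let oldWords = original.split(separator: " ")
    let newWords = revised.split(separator: " ")
    let diff = newWords.difference(from: oldWords)

    var removalOffsets = Set<Int>()     // offsets into oldWords
    var insertionMap = [Int: String]()  // offsets into newWords
    for change in diff {
        switch change {
        case .remove(let offset, _, _):
            removalOffsets.insert(offset)
        case .insert(let offset, let element, _):
            insertionMap[offset] = String(element)
        }
    }

    // Walk both word lists in lockstep, emitting markers as we go.
    var result: [String] = []
    var oldIdx = 0
    var newIdx = 0
    while oldIdx < oldWords.count || newIdx < newWords.count {
        if let inserted = insertionMap[newIdx] {
            result.append("{+\(inserted)+}")          // would be highlighted
            newIdx += 1
        } else if removalOffsets.contains(oldIdx) {
            result.append("[-\(oldWords[oldIdx])-]")  // would be struck through
            oldIdx += 1
        } else if oldIdx < oldWords.count {
            result.append(String(oldWords[oldIdx]))   // unchanged word
            oldIdx += 1
            newIdx += 1
        } else {
            break
        }
    }
    return result.joined(separator: " ")
}

// Example: a proofreading pass that fixes one word.
// Prints: {+They're+} [-Their-] going to the store
print(markUpEdits(original: "Their going to the store",
                  revised: "They're going to the store"))
```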

Hammer now, hammer later

The famous saying is that when you have a hammer, every problem looks like a nail. It’s clear that when Apple began its crash program to add Apple Intelligence to its operating systems, the goal was not to solve user problems but to insert AI features anywhere it could. This is the antithesis of Apple’s usual philosophy of solving problems rather than adopting the latest technology, and it has burned the company in some high-profile ways.

The most obvious is its use of an LLM to summarize notifications, including news updates. Many apps (including news apps) send way too many notifications, and it would be helpful to users if their phones could alleviate the pain.

I’m sure Apple’s software people have been discussing this issue for years. There are several ways they could have approached the problem, including building a new interface element for the Notification Center that rolled up multiple bubbles into one. A priority score attached to each notification would allow Apple to select the top ones to display, with a new interface to unroll the rest.
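
Here’s a hypothetical sketch of that rollup approach in Swift. Every type and name in it is invented for illustration, and none of it is an Apple API: score each pending notification, surface the highest-scoring few, and collapse the rest behind one expandable stack.

```swift
import Foundation

// Invented types for illustration; not an Apple API.
struct IncomingNotification {
    let appID: String
    let title: String
    let date: Date
    let isTimeSensitive: Bool
}

// Split pending notifications into the few worth showing
// and the rest, which a new UI could collapse into one stack.
func rollUp(_ pending: [IncomingNotification],
            topCount: Int = 3) -> (shown: [IncomingNotification],
                                   collapsed: [IncomingNotification]) {
    // A toy priority score: time-sensitive first, then most recent.
    // (timeIntervalSinceNow is negative for past dates, so newer
    // notifications score higher.)
    func score(_ n: IncomingNotification) -> Double {
        (n.isTimeSensitive ? 10_000 : 0) + n.date.timeIntervalSinceNow
    }
    let ranked = pending.sorted { score($0) > score($1) }
    return (Array(ranked.prefix(topCount)),
            Array(ranked.dropFirst(topCount)))
}
```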

There are many ways to solve this problem—not just for news apps but also for other kinds of apps like security cameras and smart locks. However, most of them would be complex and involve modifying the Notification Center interface or Apple’s push-notification cloud service. They might even require third-party developers to adopt them. In short, it would take time.

Instead, Apple rushed: Given the drive to ship AI features, it shoved a nosy summarization LLM into Notification Center. It was probably the wrong tool for the job, but all Apple’s engineers were given was a hammer.

We’re not too many months away from the unveiling of the next round of Apple Intelligence features. Will Apple continue its reckless, messy sprint to catch up, or will it try to be a little more measured? These first-wave Apple Intelligence features are so rough that they desperately need polish and reconsideration. Will they get it? Or will we be living with half-baked Writing Tools for years because the parties responsible have moved on to the next hurried feature drop?

The implementation of Image Playground gives me some hope that Apple still understands its biggest advantage when it comes to building AI: a focus on making users’ lives easier. But the rest of Apple Intelligence has me quite concerned that we’re in for a messy few years.
https://www.macworld.com/article/2605386/for-apple-to-succeed-at-ai-it-needs-to-focus-on-what-it-doe...
