AI cameras race for a real-time edge
Friday, October 10, 2025, 09:00, by ComputerWorld
The latest AI imaging tools don’t just create a picture from a prompt (you type words, and the tool makes a picture seemingly ex nihilo).
They also do another neat trick: you can upload a photograph, and they can convincingly modify it. At the moment (this category moves really fast), no product does it better than Google’s Gemini 2.5 Flash Image, more affectionately known as “Nano Banana.” But you can also modify photos with MyEdit AI Image Editor, Fotor AI Photo Editor, DeepAI Photo Editor, LogoAI Image Editing, Gooey.AI Photo Editor, and Adobe Photoshop. These tools vary in how, and how heavily, they edit photos. But what they all have in common is that you have to show up with the photo already taken.

Until now. A company called Camera Intelligence this week unveiled a highly innovative hardware peripheral for iPhones called Caira, and it does something very cool: it lets you apply Nano Banana edits right after taking the picture. The device snaps onto an iPhone 12 or newer via MagSafe. It’s an interchangeable-lens Micro Four Thirds mirrorless camera that uses the iPhone to run the app and provide the viewfinder. You can take a picture, then ask for an edit from inside the app (say you’d like to turn the dog in the photo into a velociraptor) and Nano Banana makes it. You can then upload the finished image to social media, send it to friends, or do whatever you want with it. In addition to adding objects to pictures, you can also change the lighting, replace the background, add clothing and accessories to the people in the picture, or remove people who have fallen out of favor (just as Joseph Stalin used to do in the Soviet Union).

Caira is available for pre-order via Kickstarter starting Oct. 30 and will retail for $995 when it arrives in January. (Early backers can buy the camera for $795.)
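For a sense of what a “Nano Banana” edit looks like under the hood, here is a minimal sketch using Google’s google-genai Python SDK, which exposes Gemini 2.5 Flash Image for exactly this upload-and-modify workflow. The model ID, file names, and prompt are illustrative assumptions, and this is the public API pattern, not Caira’s actual implementation.

    # A minimal sketch of prompt-based photo editing with Gemini 2.5 Flash Image
    # ("Nano Banana") via the google-genai SDK. Model ID, file names, and the
    # prompt are assumptions for illustration; an API key is read from the
    # environment (GEMINI_API_KEY or GOOGLE_API_KEY).
    from io import BytesIO

    from google import genai
    from PIL import Image

    client = genai.Client()

    photo = Image.open("dog_on_lawn.jpg")  # the picture you just took
    prompt = "Replace the dog with a velociraptor; keep the lighting and shadows consistent."

    response = client.models.generate_content(
        model="gemini-2.5-flash-image",    # assumed ID for the Nano Banana model
        contents=[prompt, photo],          # text instruction plus the source image
    )

    # The edited image comes back as inline bytes in one of the response parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("velociraptor_on_lawn.png")

Caira’s pitch is essentially to wire this kind of round trip directly into the capture flow, so the edit happens moments after the shutter fires.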
Smartphones, of course, already use AI for what’s called “computational photography.” What’s most interesting to me about Caira is that it pushes extreme AI modification in the image-making process all the way to the camera itself. In that sense, it reminds me of another camera product, the Antigravity A1 drone.

The airborne video-stitching camera drone

The Antigravity A1 is the first-ever 8K all-in-one 360 drone, announced in late July and scheduled for sale in January. The drone shoots 360-degree video with lenses positioned on top and bottom, supported by Insta360’s advanced image-stitching algorithm. (Insta360 is best known for handheld 360-degree cameras.) What’s different is that the stitching is done in real time.

The drone will ship with Antigravity Vision goggles for immersive 360 live viewing with head-tracking, plus a Grip controller. To use it, you strap the goggles to your face and fly the drone. As you’re flying, you can see through the drone’s cameras: you can look up, down, sideways, and behind, and see everything. What you don’t see is the drone. None of it. That’s because the real-time stitching algorithms erase the drone: the body, propellers, and arms are digitally removed from view in real time. In fact, the camera views overlap, and AI uses the data from both cameras to piece together a complete 360 view without any drone parts.

Other camera products are innovating by front-loading the AI processing.

The real-time revolution

Other innovative products beat the competition by front-loading AI processing in or near real time. They include:

Two product lines from Autel, the EVO Lite Enterprise series and the EVO II Pro V3, feature real-time onboard AI processing for enhanced imaging, including low-light video optimization and automated subject detection, with AI operations running locally before footage is transmitted or even saved to storage.

The FlyPix AI Platform integrates AI processing directly on devices at the moment of image or video capture. Using edge hardware such as Nvidia Jetson modules, FlyPix can achieve sub-100-millisecond latency for live analytics, allowing immediate object recognition and event alerts. (A minimal sketch of this capture-time pattern appears at the end of this article.)

IntelliVision AI Video Analytics applies AI processing directly at the edge (in the camera or a local network node) rather than relying solely on cloud or centralized servers, allowing for real-time analysis and immediate, actionable alerts. This reduces latency, minimizes bandwidth use, and improves privacy by processing sensitive video data closer to its source, according to the company.

The Camio AI Security Platform applies AI processing at the very start of the video data pipeline, theoretically enabling organizations to monitor and respond to events faster. The solution lets users describe in plain text the activities and policies they want detected; Camio’s AI interprets the query instantly as video or sensor data is captured.

A smattering of other companies are also innovating with real-time application of AI processing, including Spot AI Security Solution, HOVERAir X1 PRO and PROMAX, Lumeo AI Video Analytics, Lumana AI Analytics, Eagle Eye Networks Cloud VMS, IRIS+ AI Video Platform, and others.

While we all obsess over the capabilities of AI image processing, we should take a moment to acknowledge the innovation in exactly when those images are processed. By bringing powerful image editing and real-time analytics directly to, or near, the moment of capture, some companies are setting new standards for speed, flexibility, control, and security.
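As a closing illustration of that capture-time pattern, here is a minimal, hypothetical sketch of edge-style processing: each frame is analyzed locally the moment it comes off the camera, per-frame latency is measured, and an alert fires before anything is uploaded. The libraries (OpenCV and Ultralytics YOLO), the model, and the “person” rule are stand-ins chosen for illustration, not any vendor’s implementation.

    # A hypothetical sketch of capture-time (edge) video analytics: detect
    # objects on each frame locally and alert immediately, before any upload.
    # OpenCV + Ultralytics YOLO are illustrative stand-ins for the vendors'
    # own models and hardware (e.g., Nvidia Jetson modules).
    import time

    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")     # small pretrained detector
    cap = cv2.VideoCapture(0)      # local camera; frames never leave the device

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        result = model(frame, verbose=False)[0]            # run detection on the raw frame
        latency_ms = (time.perf_counter() - start) * 1000  # capture-to-decision latency
        labels = {model.names[int(c)] for c in result.boxes.cls}
        if "person" in labels:                             # immediate, local alert
            print(f"person detected; inference took {latency_ms:.0f} ms")

    cap.release()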
https://www.computerworld.com/article/4070482/ai-cameras-race-for-a-real-time-edge.html