Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes
Wednesday, August 13, 2025, 03:25, by Slashdot
"When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos," said Abe Davis, assistant professor of computer science at Cornell. "And if someone tries to generate fake video with AI, the resulting code videos just look like random variations."

By comparing the coded patterns against the suspect footage, analysts can detect missing sequences, inserted objects, or altered scenes. For example, content removed from an interview would appear as visual gaps in the recovered code video, while fabricated elements would often show up as solid black areas.

The researchers have demonstrated the use of up to three independent lighting codes within the same scene. This layering increases the complexity of the watermark and raises the difficulty for potential forgers, who would have to replicate multiple synchronized code videos that all match the visible footage. The technique, called noise-coded illumination, was presented on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.

Read more of this story at Slashdot.
https://slashdot.org/story/25/08/12/2214243/cornell-researchers-develop-invisible-light-based-waterm...
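The detection step described above is easy to illustrate. Below is a minimal, hypothetical sketch of the general idea in Python, not the authors' actual pipeline: a light source flickers with a small pseudorandom code, each pixel's time series is correlated against that known code to recover a "code video," and regions that fail to carry the code are flagged as suspect. All names (such as `recover_code_video`) and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): frame count, resolution,
# and the amplitude of the pseudorandom brightness modulation.
T, H, W = 512, 64, 64
amplitude = 0.02  # ~2% flicker, small enough to be imperceptible

# The secret per-frame code carried by the light source: a zero-mean
# pseudorandom sequence known only to the verifier.
code = rng.choice([-1.0, 1.0], size=T)

# Simulate captured footage: a static scene modulated by the coded light,
# plus a little sensor noise.
scene = rng.uniform(0.2, 0.8, size=(H, W))
video = scene[None] * (1.0 + amplitude * code[:, None, None])
video += rng.normal(0.0, 0.005, size=video.shape)

# Simulate tampering: an inserted patch that ignores the coded light.
tampered = video.copy()
tampered[:, 20:40, 20:40] = 0.5

def recover_code_video(frames, code):
    """Correlate each pixel's time series with the known code.

    Pixels lit by the coded source yield a strong response; inserted
    or AI-generated content decorrelates toward zero, which is why
    fabricated regions show up as dark areas in the code video.
    """
    zero_mean = frames - frames.mean(axis=0, keepdims=True)
    return np.tensordot(code, zero_mean, axes=(0, 0)) / len(code)

response = recover_code_video(tampered, code)
suspect = np.abs(response) < 0.3 * np.abs(response).mean()
print(f"flagged {suspect.mean():.0%} of pixels as inconsistent with the code")
```

Running this flags roughly the 10% of pixels covered by the inserted patch; the paper's layered variant would repeat the correlation for each of the independent lighting codes.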