ITHACA, N.Y. – Fact-checkers may have a new tool in the fight against misinformation. A team of Cornell University researchers has developed a way to “watermark” the light in a scene, which can then be used to detect whether video footage of that scene is fake or has been manipulated.
The idea is to hide information in nearly invisible fluctuations of lighting at important events and locations, such as interviews and press conferences, or even in entire buildings, like the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans but are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.
Peter Michael, a graduate student in the field of computer science who led the work, will present the study on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science, who first conceived of the idea. “Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real.”
To address these concerns, researchers had previously designed techniques to watermark digital video files directly, with tiny changes to specific pixels that can be used to identify unmanipulated footage or tell if a video was created by AI. However, these approaches depend on the video creator using a specific camera or AI model – a level of compliance that may be unrealistic to expect from potential bad actors.
By embedding the code in the lighting, the new method ensures that any real video of the subject contains the secret watermark, regardless of who captured it. The team showed that programmable light sources, like computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. The program on the chip varies the brightness of the light according to the secret code.
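The article does not give the exact coding scheme, but the mechanics are easy to picture: a pseudorandom sequence derived from a secret key drives tiny brightness changes. The Python sketch below illustrates that idea only; the ±1 code, the set_brightness() driver callback, and the modulation depth and timing are all assumptions for illustration, not details from the paper.

```python
import hashlib
import time

def code_sequence(secret_key: bytes, n: int) -> list[float]:
    """Derive a pseudorandom +/-1 code of length n from a secret key.

    Hypothetical stand-in for the paper's coding scheme: anyone holding
    the key (the light's chip, or a forensic analyst later) can
    regenerate the exact same sequence.
    """
    bits: list[float] = []
    counter = 0
    while len(bits) < n:
        digest = hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()
        for byte in digest:
            for k in range(8):
                bits.append(1.0 if (byte >> k) & 1 else -1.0)
        counter += 1
    return bits[:n]

def run_light(set_brightness, secret_key: bytes,
              base: float = 0.8, depth: float = 0.01,
              fps: int = 120, seconds: int = 10) -> None:
    """Flicker a light around its base brightness according to the code.

    `set_brightness` is a hypothetical driver callback taking 0.0-1.0;
    `depth` keeps the fluctuation small enough to go unnoticed by
    viewers while still being recorded by cameras.
    """
    for c in code_sequence(secret_key, fps * seconds):
        set_brightness(base + depth * c)
        time.sleep(1.0 / fps)
```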
So, what secret information is hidden in these watermarks, and how does it reveal when video is fake? “Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.”
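One plausible way to picture the recovery step is matched filtering: because the analyst can regenerate the secret sequence, frames recorded while the light was nudged up can be weighted opposite to frames recorded while it was nudged down, so the coded component accumulates while everything else averages out. The sketch below shows that idea, assuming grayscale frames already aligned in time with the code; the published method and its parameters may differ.

```python
import numpy as np

def recover_code_video(frames: np.ndarray, code: np.ndarray,
                       window: int = 60) -> np.ndarray:
    """Correlate footage with the known code to expose the hidden signal.

    frames: (T, H, W) grayscale video, aligned in time with the code.
    code:   (T,) +/-1 sequence regenerated from the secret key.
    Returns a (T - window + 1, H, W) low-fidelity "code video": pixels
    genuinely lit by the coded source track the code and survive the
    averaging; edited-in or AI-generated regions don't, so they come
    out as noise or near-black.
    """
    T, H, W = frames.shape
    out = np.empty((T - window + 1, H, W), dtype=np.float32)
    for t in range(T - window + 1):
        w = code[t:t + window, None, None]        # sliding code window
        out[t] = (frames[t:t + window] * w).mean(axis=0)
    return out
```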
If an adversary cuts out footage, such as from an interview or political speech, a forensic analyst with the secret code can see the gaps. And if the adversary adds or replaces objects, the altered parts generally appear black in recovered code videos.
The team has successfully used up to three separate codes for different lights in the same scene. With each additional code, the patterns become more complicated and harder to fake.
“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.”
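To make that point concrete, here is an illustrative consistency check (not the paper's algorithm) for a scene with several coded lights: each light's code is correlated against the footage independently, and regions that fail to match any one code are flagged. The threshold is made up for the example.

```python
import numpy as np

def flag_inconsistent_pixels(frames: np.ndarray,
                             codes: list[np.ndarray]) -> np.ndarray:
    """Flag pixels whose flicker fails to match any one light's code.

    frames: (T, H, W) grayscale video; codes: one (T,) +/-1 sequence
    per coded light. The 1e-3 cutoff is illustrative only.
    """
    centered = frames - frames.mean(axis=0, keepdims=True)  # drop the steady brightness
    suspicious = np.zeros(frames.shape[1:], dtype=bool)
    for code in codes:
        corr = (centered * code[:, None, None]).mean(axis=0)  # per-pixel match strength
        suspicious |= np.abs(corr) < 1e-3   # weak match to this code -> flag
    return suspicious
```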
The research team has also verified that this approach works in some outdoor settings and on people with different skin tones.
For additional information, read this Cornell Chronicle story.
-30-