As artificial intelligence continues to blur the lines between what’s real and what’s generated, Google DeepMind has unveiled SynthID, a groundbreaking watermarking technology that can invisibly tag AI-generated images, audio, video and even text.
The system is designed to help people identify when content has been created by AI models, offering a powerful tool in the fight against misinformation and misattribution.
What is SynthID?
SynthID is an invisible digital watermark embedded directly into the fabric of AI-generated media. Developed by Google DeepMind, the technology was first launched in 2023 for images, and has since expanded to cover text, audio and video produced through Google’s AI models, including Gemini, Imagen, Lyria and Veo.
Unlike visible watermarks or metadata tags, which can be easily cropped out or deleted, SynthID’s signature is hidden within the data itself. More than 10 billion pieces of content have already been marked with the technology, helping creators and consumers alike recognise when they’re engaging with synthetic media.
How It Works Across Different Media
For images and videos, SynthID subtly alters pixel values to embed a digital watermark without affecting visual quality. The mark survives even after common edits such as cropping, resizing or adding filters, making it nearly impossible to remove without destroying the content.
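SynthID’s exact scheme is proprietary, but the general idea of hiding a detectable signal in pixel values can be sketched with a classic spread-spectrum watermark: a key-derived pseudo-random pattern is added to the image, and detection correlates the image against that same pattern. Everything below is illustrative (and the perturbation strength is exaggerated so the toy detector is reliable):

```python
import numpy as np

def embed_watermark(image, key, strength=6.0):
    """Add a key-derived pseudo-random +/- pattern to the pixel values.
    A few grey levels out of 255 is visually imperceptible."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image, key, strength=6.0):
    """Correlate against the key's pattern: a marked image scores
    near `strength`, an unmarked one near zero."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > strength / 2

# Demo on a random 128x128 "photo"
photo = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(float)
marked = embed_watermark(photo, key=42)

# The detector should report the watermark only in the marked copy
print(detect_watermark(marked, key=42), detect_watermark(photo, key=42))
```

A production system hides the signal in perceptually insignificant components and trains the detector alongside the generator; this sketch only shows why a mark can be invisible to the eye yet statistically easy to find with the key.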
In audio, the watermark is embedded into the sound waveform during generation. It is completely inaudible to human ears and remains detectable even after compression, added noise or playback-speed changes. This is currently applied to audio produced with Google’s Lyria model and NotebookLM podcast tools.
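The same correlation idea helps explain why such a watermark survives noise: the detector averages over every sample, so distortions uncorrelated with the key’s pattern wash out. A minimal sketch, again with the embedding strength exaggerated for demonstration and numpy as an assumed dependency:

```python
import numpy as np

SAMPLE_RATE = 16_000

def embed(audio, key, strength=0.01):
    """Mix in a key-derived +/- pattern far below the signal level."""
    rng = np.random.default_rng(key)
    return audio + strength * rng.choice([-1.0, 1.0], size=audio.shape)

def detect(audio, key, strength=0.01):
    """Correlating with the key's pattern averages out everything else."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * pattern)) > strength / 2

t = np.arange(10 * SAMPLE_RATE) / SAMPLE_RATE        # ten seconds of audio
tone = 0.5 * np.sin(2 * np.pi * 440 * t)             # a pure A440 tone
marked = embed(tone, key=7)

# Even heavy added noise leaves the watermark detectable,
# while the clean tone stays unmarked
noisy = marked + np.random.default_rng(1).normal(0, 0.05, marked.shape)
print(detect(noisy, key=7), detect(tone, key=7))
```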
When it comes to text, the process is more intricate. SynthID adjusts the token probabilities a large language model uses as it generates each word. These subtle variations encode a hidden pattern, a kind of linguistic fingerprint that is imperceptible to readers but can later be detected by verification tools.
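SynthID’s published text scheme uses a tournament-based sampling method; a simpler published relative of the idea, “green-list” biasing, illustrates how nudging token probabilities leaves a countable statistical fingerprint. The tiny vocabulary, key and thresholds below are all made up for illustration:

```python
import hashlib
import math
import random

SECRET_KEY = "demo-key"   # hypothetical; real deployments keep the key private
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def green_list(prev_token):
    """Pseudo-randomly split the vocabulary using the previous token and
    the secret key; 'green' tokens will get a probability boost."""
    return {t for t in VOCAB
            if hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{t}".encode())
                      .digest()[0] % 2 == 0}

def generate(n_tokens, bias=3.0, seed=0):
    """Sample from a toy 'language model' (uniform base distribution),
    multiplying green-token weights by e**bias at every step."""
    rng, out, prev = random.Random(seed), [], "<s>"
    for _ in range(n_tokens):
        greens = green_list(prev)
        weights = [math.exp(bias) if t in greens else 1.0 for t in VOCAB]
        prev = rng.choices(VOCAB, weights=weights)[0]
        out.append(prev)
    return out

def green_fraction(tokens):
    """Detector: share of tokens in their step's green list. Unwatermarked
    text sits near 0.5; watermarked text runs much higher."""
    hits = sum(t in green_list(p) for p, t in zip(["<s>"] + tokens, tokens))
    return hits / len(tokens)

marked = generate(200)
r = random.Random(1)
plain = [r.choice(VOCAB) for _ in range(200)]
print(round(green_fraction(marked), 2), round(green_fraction(plain), 2))
```

SynthID-Text’s actual sampling differs in detail, but the principle is the same: the key recovers a bias that readers cannot perceive but a detector can count.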
The SynthID Detector
To make verification easier, Google has launched the SynthID Detector, an online portal allowing users to upload content and check for SynthID watermarks.
The detector can analyse images, video segments, audio clips and text, highlighting which sections are most likely to contain the watermark. Journalists, researchers and media professionals are among the first invited to test the service before wider rollout.
This system provides three levels of confidence: “watermarked”, “not watermarked” and “possibly watermarked”, helping users interpret results with clarity.
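The three labels can be pictured as a threshold rule over an underlying detection score; the cut-offs below are invented for illustration, not Google’s actual values:

```python
def confidence_label(score: float, lo: float = 0.3, hi: float = 0.7) -> str:
    """Map a detection score in [0, 1] to the portal's three labels.
    Thresholds `lo` and `hi` are illustrative placeholders."""
    if score >= hi:
        return "watermarked"
    if score <= lo:
        return "not watermarked"
    return "possibly watermarked"

print(confidence_label(0.95), confidence_label(0.5), confidence_label(0.05))
```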
Built to Resist Tampering
Google says SynthID is engineered to withstand typical modifications such as compression, filtering and translation. Unlike metadata, which can be stripped away, the SynthID watermark remains detectable even after extensive editing.
The technology has already been open-sourced for text watermarking, allowing developers to integrate it into their own AI systems. Google has also partnered with NVIDIA and GetReal Security to expand the ecosystem and promote shared standards for authenticity across the web.
A Step Toward Transparency
While not infallible, SynthID marks a major step toward digital transparency in an era of deepfakes and AI-generated misinformation.
“Being able to identify AI-generated content is critical to empowering people,” said Pushmeet Kohli, Google DeepMind’s VP of Science and Strategic Initiatives. “SynthID helps maintain trust between creators and users, and ensures we all know when AI is behind the curtain.”