OpenAI to Test Watermarking for GPT-4o Images
According to various foreign media reports, OpenAI has recently begun testing the image generation feature of GPT-4o in ChatGPT, adding an “ImageGen” watermark to images generated for free users.
The test is speculated to be a response to the growing number of free users employing the ImageGen model to create images in the style of Studio Ghibli. AI researcher Tibor Blaho pointed out on the social media platform Threads that OpenAI is experimenting with both “visible” and “invisible” watermark formats, primarily targeting free ChatGPT users. Generated images will carry a prominently visible “ImageGen” watermark, while paid subscribers will receive outputs without it.
OpenAI Introduces Dual Verification Mechanism with AI Watermark and Invisible Metadata
According to WinBuzzer, in addition to the visible ImageGen watermark, OpenAI’s strategy includes a “dual verification mechanism”: generated images also carry invisible metadata compliant with the C2PA (Coalition for Content Provenance and Authenticity) standard, recording timestamps, software labels, and source markers, among other details, to verify the content’s origin. OpenAI has already deployed C2PA metadata in DALL·E 3 image generation to track content provenance.
However, OpenAI has previously acknowledged that metadata-based verification has limitations: if an image is cropped, screenshotted, or uploaded to a platform that strips metadata, these invisible markings become ineffective. Nevertheless, OpenAI remains an active supporter of legislation on AI watermarking. For instance, OpenAI, along with Adobe and Microsoft, backs California’s AB 3211 bill, which would require technology companies to label AI-generated content to curb the spread of misinformation.
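The fragility described above is easy to illustrate: C2PA-style provenance data lives in the file container rather than in the pixels, so any tool that re-encodes only the pixels silently drops it. Below is a minimal, stdlib-only sketch; the PNG chunk layout is real, but the `provenance` keyword and its contents are hypothetical stand-ins, not OpenAI’s actual C2PA payload.

```python
# Sketch: provenance metadata lives in an ancillary PNG chunk (tEXt),
# so re-encoding only the pixels -- as a screenshot tool does -- drops it.
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_metadata(text: bytes) -> bytes:
    """Build a 1x1 grayscale PNG carrying provenance info in a tEXt chunk."""
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    text_chunk = png_chunk(b"tEXt", b"provenance\x00" + text)  # hypothetical keyword
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\xff"))  # filter byte + 1 pixel
    iend = png_chunk(b"IEND", b"")
    return b"\x89PNG\r\n\x1a\n" + ihdr + text_chunk + idat + iend

def strip_to_pixels(png: bytes) -> bytes:
    """Re-encode keeping only critical chunks, dropping all ancillary metadata."""
    out, pos = b"\x89PNG\r\n\x1a\n", 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out += png[pos:pos + 12 + length]
        pos += 12 + length  # length + type + data + CRC
    return out

original = make_png_with_metadata(b"generator=ImageGen;ts=2025-04-01")
stripped = strip_to_pixels(original)
print(b"provenance" in original)   # True  -- metadata present
print(b"provenance" in stripped)   # False -- image unchanged, provenance gone
```

The pixels survive the round trip untouched, which is exactly why OpenAI pairs the invisible metadata with a visible watermark.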
Google and Microsoft Also Implement AI Watermarking
Not only OpenAI, but many tech giants are seeking ways to authenticate AI-generated content. For example, Google plans to expand the SynthID system, developed by Google DeepMind, to Google Photos by February 2025. The technology applies not only to images generated entirely by AI but also to those edited with AI, embedding invisible watermarks directly in the images’ pixels.
Microsoft, through its Azure OpenAI service, introduced watermarking technology in September 2024, embedding encrypted metadata into images generated by DALL·E. This metadata records who generated the image, when it was generated, and which software was used. Microsoft is also collaborating with Adobe, Truepic, and the BBC to establish a unified content verification standard across different platforms.
Can Watermarking Technology Be Challenged?
Watermarking technology is not impervious to attack. In October 2023, researchers at the University of Maryland published a paper showing that AI watermarks could be defeated with a method known as “diffusion purification”: by adding noise to an image and then denoising it, the invisible watermarks can be effectively erased. Diffusion purification can also forge watermarks, making a genuine image appear to be AI-generated.
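The add-noise-then-denoise idea can be sketched with a toy example. The stdlib-only code below uses a 1-D “image,” a high-frequency pixel pattern as the invisible watermark, and simple pair averaging in place of a real diffusion-model denoiser; all names, amplitudes, and the detector are illustrative assumptions, not the paper’s actual method.

```python
# Toy diffusion-purification sketch: noise + low-pass denoising erases a
# pixel-level watermark while leaving the underlying image largely intact.
import random

random.seed(0)
N = 64
AMP = 2.0

base = [i / 4 for i in range(N)]                           # smooth "image"
pattern = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]  # secret watermark
watermarked = [b + AMP * p for b, p in zip(base, pattern)]

def score(img):
    """Detector: correlation of the image with the secret pattern."""
    return sum(v * p for v, p in zip(img, pattern)) / len(img)

def purify(img, noise_std=1.0):
    """Purification sketch: add Gaussian noise, then denoise by pair averaging."""
    noisy = [v + random.gauss(0.0, noise_std) for v in img]
    out = []
    for k in range(0, len(noisy), 2):      # crude low-pass "denoiser"
        m = (noisy[k] + noisy[k + 1]) / 2
        out += [m, m]
    return out

print(score(watermarked))           # -> 1.875: well above zero, watermark detected
print(score(purify(watermarked)))   # close to 0.0: watermark erased
```

The alternating watermark is pure high-frequency content, so any denoiser that smooths fine detail removes it; a real diffusion model does the same thing far more gracefully, which is what makes the attack practical.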
The research team stated that relying solely on watermarks may not provide adequate protection against media manipulation or misinformation.
This article is a collaborative reprint from: Digital Age
Source: WinBuzzer, Bleeping Computer