Watermarking alone will not solve AI-generated content abuse

Activities like spreading misinformation, creating fake explicit images of individuals, and producing images that infringe on copyright protections were possible before the emergence of artificial intelligence, but generative AI has made them easier. In response to these concerns, some policymakers have proposed mandatory watermarks on all AI-generated content—a distinct, unique signal embedded in the content itself. However, there are significant technical limitations to applying watermarks to images, and watermarking is not a foolproof solution for stopping misinformation, deepfakes, or copyright violations.

Read the full article from the Center for Data Innovation here.
