Output Safeguards: Disclose the Use of AI
In 2024, it’s still possible to spot the flaws in AI-generated images and video—to tell fake from fact. Within a year or two, however, GenAI technology will reach a point where it’s nearly impossible to tell the difference with the naked eye.
That’s why it’s imperative to adopt safeguards now.
At the Transparency Coalition, we believe the most important AI output provision is also the most basic: disclose the use of AI.
Model Legislation for AI Disclosure
California’s AI Transparency Act, adopted in September 2024, provides a model for this kind of disclosure. The Act requires AI developers to embed a latent disclosure, a machine-readable marker, in the media their AI creates, and to offer a free AI detection tool on their websites. The detection tool allows consumers to upload digital media and discover whether the developer’s AI was used to create that content.
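To make the mechanism concrete, here is a minimal sketch of what such a detection check might look like on the developer's side, assuming (hypothetically) that the disclosure is stored as a JSON note in a PNG text chunk under the key "ai_provenance". The key name, payload format, and function names are illustrative only; they are not drawn from the Act or from any vendor's actual tooling.

```python
# Hypothetical sketch of a consumer-facing AI detection check.
# Assumes the developer embedded a JSON provenance note in a PNG
# text chunk under the key "ai_provenance" -- the key name and
# payload format are illustrative, not any real vendor's scheme.
import json
from PIL import Image  # requires the Pillow package

PROVENANCE_KEY = "ai_provenance"  # hypothetical metadata key

def check_ai_disclosure(path: str) -> str:
    """Report whether an image file carries an embedded AI disclosure."""
    img = Image.open(path)
    # PNG text chunks surface in img.info as plain strings.
    raw = img.info.get(PROVENANCE_KEY)
    if raw is None:
        return "No AI disclosure found in this file's metadata."
    try:
        note = json.loads(raw)
    except json.JSONDecodeError:
        return "Disclosure present but unreadable (possibly altered)."
    return (f"AI-generated: created by {note.get('generator', 'unknown')} "
            f"on {note.get('created', 'unknown date')}.")

if __name__ == "__main__":
    print(check_ai_disclosure("sample.png"))
```

Note the limitation this sketch illustrates: a plain metadata chunk is trivially stripped by re-encoding or screenshotting. A true latent disclosure must survive cropping, compression, and metadata removal, which is why production systems lean on cryptographically signed manifests (such as C2PA Content Credentials) or pixel-level watermarks rather than a text chunk like this one.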
This technology already exists, and a few ethical AI developers are showing how it can be used to empower consumers. It’s up to policymakers at the state and national level to require it of all developers—for the common good.