AI Safeguards: Where to Start

At the Transparency Coalition we believe AI policy discussion and legislative action happen at many levels simultaneously. We’re not here to ban artificial intelligence, and we’re neither dismissing nor endorsing the catastrophic fears some have raised about AI’s evolution.

Our mission is to address known AI safety and privacy risks with practical solutions.

We’re focused on bringing transparency to both AI inputs and AI outputs.

· Inputs are the huge datasets used to train the world’s most powerful AI models, the models that power systems like ChatGPT, Midjourney, and Meta AI.

· Outputs are the chatbot responses, the conjured audio, and the images and video created by AI systems.

Start with Training Data and AI Disclosure

Transparency Coalition’s leaders and subject matter experts are actively working with policymakers to craft legislation in both areas. One initial step is to require transparency around the training data used to create AI models. Another is to empower consumers to know when AI has been used to create or alter images, video, or audio.

In other words: Disclose the nature of the data used to create the AI model, and disclose the use of AI in the creation of content.

Artificial intelligence is the most significant scientific-technological innovation of the 21st century—but it’s not some new mysterious wizardry. We have the capacity to construct commonsense guardrails that encourage benevolent innovation while protecting individuals and society from the very real risk of harm.

Next: Input Safeguards: Require Transparency in AI Training Data
