Legislation for Transparency in AI Now.

The Transparency Coalition is working to create AI safeguards for the greater good.

Artificial intelligence has the potential to be a powerful tool for human progress if properly trained, harnessed, and deployed.

We believe policies must be created to protect personal privacy, nurture innovation, and foster prosperity.

It begins with transparency.

We are focused on creating tools, systems, and policies that hold AI developers and deployers accountable for the legal and ethical creation and operation of AI systems.

Artificial intelligence is not too complex to understand. It’s engineering, not wizardry. Fair-minded business and government leaders have created systems to ensure the safety and transparency of the financial industry, the food and drug industries, and the aviation sector. We can and must do the same with artificial intelligence.


We’re focused on training data. Here’s why.

Training data is the foundation of artificial intelligence.

It’s what AI systems like ChatGPT use to provide answers to the prompts we provide. It’s what generative image systems like Midjourney use to conjure AI-created art.

Deepfakes and other harmful uses of AI already receive considerable attention from policymakers. We’re focused on transparency in training data because it’s profoundly important and few others are taking on the challenge.

Today, the misuse of that source material goes largely unchecked. That’s led to lawsuits alleging massive copyright infringement and the wrongful use of personal data. It’s also led to many wrong, misleading, or misinformed answers. See, for instance, Google’s infamous AI Overviews advice to mix Elmer’s glue into pizza sauce—an answer tied to scraped training data that treated a sarcastic Reddit comment as a serious fact.

Learn more about training data here.


THE DANGER OF DOING NOTHING.

At its core, this is a safety issue.

Transparency and appropriate regulation have encouraged innovation and created consumer trust in many of America’s most robust industries. Think of medicine, finance, food processing, and aviation. Without rules, oversight, verification, and enforcement, bad actors could drive those critical sectors off the rails.

AI developers and deployers should be held to similar standards.

The dangers of creating and unleashing AI systems in a lawless environment are quickly becoming apparent. Abhorrent, abusive deepfake images are harming children in our schools. Online chatbots are wrongly accusing innocent adults of crimes like fraud, embezzlement, and worse. Singers, songwriters, and artists are being robbed of their own work—and their own identity.

These dangers aren’t limited to famous performers. The massive datasets used to train AI models can include personally identifiable information, known as PII. That information, once seen by an AI model, can’t be unseen. That same information may then be offered as an answer to a chatbot prompt.

There’s no reason training data shouldn’t be regulated. It’s the foundation on which all AI systems are built, and the single greatest factor in determining their accuracy. Failing to regulate training data presents a clear and present risk of consumer harm. We’re now learning a hard lesson from our negligence around social media. A generation of youth and young adults are struggling with mental health issues because we failed to recognize the need to enact safeguards around the development and use of social media.

We must not make the same mistake with AI. Now is the time to act.


The Need to Act Now.

The speed at which AI technology is evolving doesn’t allow a wait-and-see attitude.

We need to start building a foundation for ethical AI today.

Billion-dollar tech giants like OpenAI are rushing to establish dominance, scraping the world’s data to create the most powerful AI systems ever built. Even tech company insiders are alarmed at the ‘culture of recklessness and secrecy’ surrounding AI’s all-consuming chase for growth and profit.

Tech founders too often fall back on the phrase ‘move fast and break things’ to justify reckless maneuvers. In the area of AI, the ‘things’ that get broken are us: people, our personal privacy, our identities, our jobs, our opportunities, our lives.

Ensuring a fair business opportunity for all means not only enforcing existing regulations but also gaining insight into the practices of AI model owners. Our solutions are linked below.

Important news in AI

Transparency Coalition AI Bill Tracker 2024

Track all the AI-related bills introduced in all 50 states during the 2024 state legislative session. Look for a fresh 2025 tracker coming in January.