Adobe’s Firefly AI text-to-video release raises tough questions

This still image was created using Adobe’s Firefly AI tool. Information about the image’s provenance (origin, AI use, history, etc.) was embedded using the ‘Content Credentials’ tool.

Oct. 17, 2024 — Adobe, the global digital design platform, released its latest AI video creation tool earlier this week. It’s impressive. And it raises some compelling questions about AI-generated video, safety, and transparency.

The company’s Firefly AI tool previously offered professional photographers, designers, and editors the power to use AI to edit and enhance still images. Firefly’s upgraded version, released on Monday to existing customers on a limited basis, includes both AI text-to-video and image-to-video capabilities.

The upgrades are significant: they allow anyone with the tool to create fully realized videos from just a few text prompts and a single image. Other tech companies have demonstrated products with this capability, such as Microsoft’s VASA and OpenAI’s Sora, but have not yet released them to the general public. Sora is currently being tested by OpenAI red teams and select artists to assess critical areas of harm and gather feedback on the user experience. Microsoft has held back VASA out of concern over the technology’s lack of safeguards, especially during the U.S. presidential election season.

Will Firefly have built-in AI disclosure?

In terms of AI safety and transparency, there’s good news and bad news in Adobe’s limited launch.

The bad news: text-to-video and image-to-video tools are coming at us faster than may be good for the safety of individuals and society. Malicious actors are already using OpenAI tools to disrupt elections around the world, according to a report issued by OpenAI last week. Those threats ranged from AI-generated website articles to social media posts by fake accounts.

The good news about Firefly AI video: It’s being released by Adobe, which has been an industry leader in creating and adopting AI disclosure tools.

C2PA provenance and ‘Content Credentials’

The company was an early partner in C2PA, the Coalition for Content Provenance and Authenticity, a group of roughly 100 AI-related companies that developed an open technical standard to help publishers, creators, and consumers trace the authenticity and history of digital content.

Adobe is also an early adopter of Content Credentials, a C2PA-compliant tool that embeds provenance data in an image and attaches a visible “cr” pin in the upper right corner.
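
To make the mechanism concrete, here is a minimal conceptual sketch of how a provenance manifest can be attached to an image. This is not the real C2PA format, which embeds a certificate-signed manifest inside the file itself; it only illustrates the underlying pattern of hashing the content and signing a record of its origin and AI use. The file name, claims, and signing key are all illustrative.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Illustrative shared-secret key. Real C2PA manifests are signed with
# X.509 certificate-based signatures, not an HMAC secret.
SIGNING_KEY = b"demo-key-not-for-production"

def create_manifest(image_path: str, claims: dict) -> dict:
    """Build a simplified provenance manifest: a hash of the image bytes
    plus claims about its origin, AI use, and edit history."""
    image_bytes = Path(image_path).read_bytes()
    manifest = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

if __name__ == "__main__":
    manifest = create_manifest(
        "firefly_output.png",  # hypothetical file name
        {
            "generator": "text-to-image model",
            "ai_generated": True,
            "history": ["generated", "color-corrected"],
        },
    )
    # Saved as a sidecar file here; C2PA instead embeds the manifest
    # directly in the image file.
    Path("firefly_output.png.manifest.json").write_text(json.dumps(manifest, indent=2))
```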

Selecting the “cr” pin reveals basic information about the image’s origin and AI-related history. Selecting the “Inspect” button opens a Content Credentials web page with a trove of information about the image’s creation and AI-enhanced revisions.

Adobe’s existing Firefly AI tools embed ‘Content Credentials’ into every AI-generated image, giving consumers insight into the image’s authenticity.
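
On the consumer side, an inspection tool essentially re-derives and checks that signed record. Continuing the simplified sketch above (again, a stand-in for C2PA’s actual verification flow, which validates certificate chains rather than a shared key):

```python
import hashlib
import hmac
import json
from pathlib import Path

SIGNING_KEY = b"demo-key-not-for-production"  # same illustrative key as above

def verify_manifest(image_path: str, manifest_path: str) -> bool:
    """Check that the manifest's signature is intact and that the image
    still matches the content hash recorded at creation time."""
    manifest = json.loads(Path(manifest_path).read_text())
    signature = manifest.pop("signature")

    # Recompute the signature over the remaining manifest fields.
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was altered, or signed with a different key

    # Recompute the content hash to confirm the image itself is unchanged.
    image_hash = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    return image_hash == manifest["content_sha256"]

if __name__ == "__main__":
    ok = verify_manifest("firefly_output.png", "firefly_output.png.manifest.json")
    print("provenance verified" if ok else "provenance check failed")
```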

Will Firefly’s next-gen AI video tools come with Content Credentials? We would assume so, as previous generations of Firefly automatically applied Content Credentials metadata to still-image assets generated with the AI tool.

Adobe seems to be moving cautiously with Firefly, allowing it to be used only by existing customers, who must apply online to use the next-gen tools. This may give the company a chance to assess the dangers and risks in a sort of digital soft opening.

This is why AI transparency requirements are needed

Technological innovations in artificial intelligence are leaping ahead at shocking speed. ChatGPT was released to the public less than two years ago. Now we have text-to-video capabilities at our fingertips.

The legislative process is, by design, a slow-moving creature, ill-adapted to rapid technological change. Fortunately, there are solutions at hand.

Last month Gov. Gavin Newsom signed the California AI Transparency Act into law. The Act, authored by state Sen. Josh Becker with input from AI transparency advocate Tom Kemp, empowers consumers to identify AI-generated content by requiring AI developers to embed metadata in their systems’ output that identifies it as such.

The Act doesn’t require those metadata tools until Jan. 1, 2026. But legislators in other states have expressed interest in passing similar legislation to codify the emerging disclosure standard and encourage the adoption of ethical AI tools.
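
As a rough illustration of the kind of check such disclosure metadata enables, the sketch below scans an image’s XMP metadata for the IPTC digital-source-type marker that several industry tools use to label AI-generated media. It assumes Pillow is installed along with defusedxml (which Pillow’s XMP parsing requires), and it is only a heuristic: metadata can be stripped, so a negative result proves nothing about an image’s authenticity.

```python
from PIL import Image  # pip install Pillow defusedxml

# IPTC digital-source-type code used to flag fully AI-generated media.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def appears_ai_generated(image_path: str) -> bool:
    """Heuristic check: does the image's XMP metadata carry the IPTC
    marker used to disclose AI-generated content?"""
    with Image.open(image_path) as img:
        xmp = img.getxmp()  # returns {} when no XMP packet is present
    # XMP layouts vary by tool, so search the parsed tree as a string.
    return AI_SOURCE_TYPE in str(xmp)

if __name__ == "__main__":
    path = "firefly_output.png"  # hypothetical file name
    tag = "found" if appears_ai_generated(path) else "not found"
    print(f"{path}: AI disclosure tag {tag}")
```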

Adobe’s use of Content Credentials represents a positive step in the evolution of ethical and transparent AI. Disclosure laws like the California AI Transparency Act serve to bolster developers like Adobe who are embedding disclosure tools in their products and working toward the global adoption of C2PA standards. Without legal guardrails, unscrupulous competitors may introduce similar text-to-video AI tools that contain no such safeguards against deepfake deception.

The Transparency Coalition is working with policymakers in a number of states to craft appropriate bills that encourage AI innovation while protecting individuals and society. To contact our AI policy experts, reach out through the contact form on our website.
