AI Watermarking in a High Watermark State

Continuing our blog series on AI compliance, this week we tackle the challenges posed by California’s AI Transparency Act. As the saying goes, “as California goes, so goes the nation.” Thus, we look to California as a bellwether of what is to come in this space.

General Overview

SB-942, the California AI Transparency Act, was signed into law by Governor Newsom in September 2024 and takes effect on January 1, 2026. The purpose of the Act is to ensure digital content provenance and to combat audio and visual deepfakes, with the goal of protecting California-based businesses, the state government, and, of course, state residents.

Once in effect, the Act will require a “covered provider” to fulfill several obligations, including:

(1) providing, at no cost to the user, a publicly accessible AI detection tool;

(2) offering users the option to include a manifest disclosure identifying content created by the covered provider’s generative artificial intelligence (GenAI) system as AI-generated;

(3) including a latent disclosure in AI-generated content that embeds some method of identifying the content’s provenance data; and

(4) maintaining a method to revoke a third party’s license to the GenAI system if the licensee no longer includes the disclosures listed above.

A “covered provider” is defined as a person that creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state.

The Specifics of Each Requirement

(1) No-cost, publicly accessible AI detection tool

The tool must be able to assess whether image, video, or audio content, or content in any combination thereof, was created or altered by the covered provider’s GenAI system. In line with privacy standards, the tool must not output any personal data detected in the content, and the covered provider must not collect or retain personal information from users of the tool. The one exception: if a user submits feedback on the efficacy of the tool, the covered provider may retain the user’s contact information, provided the user opts in to being contacted by the covered provider.
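To make this concrete, here is a minimal sketch (illustrative only, not legal guidance) of what such a detection check might look like, assuming the disclosure has been embedded as a JSON payload in a PNG text chunk under a hypothetical “ai_provenance” key (see the embedding sketch in section (3) below). Note that it outputs only system-level fields and retains nothing about the user:

```python
# A minimal sketch of a privacy-respecting detection check. The
# "ai_provenance" key and field names are assumptions for this example.
import json
from PIL import Image

def detect(path: str) -> dict:
    """Report whether content carries the hypothetical latent disclosure."""
    with Image.open(path) as img:
        # PNG text chunks; getattr() handles formats without a .text mapping.
        raw = getattr(img, "text", {}).get("ai_provenance")
    if raw is None:
        return {"ai_generated": False}
    data = json.loads(raw)
    # Output only non-personal, system-level fields, per the Act's
    # privacy requirements; nothing about the querying user is stored.
    return {
        "ai_generated": True,
        "system": data.get("system"),
        "version": data.get("version"),
        "created": data.get("created"),
        "id": data.get("id"),
    }
```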

(2) Manifest disclosure on AI-generated content

A manifest disclosure, under this Act, is compliant if: (a) the disclosure identifies the content as AI-generated; (b) the disclosure is clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person; and (c) the disclosure is permanent or extraordinarily difficult to remove.

(3) Latent disclosure of provenance data

Under this Act, “provenance data” means data that is embedded into digital content, or that is included in the digital content’s metadata, for the purpose of verifying the digital content’s authenticity, origin, or history of modification. “System provenance data” is provenance data that is not reasonably capable of being associated with a particular user and that contains either information about the type of device, system, or service used to generate the digital content, or information related to content authenticity. The covered provider must therefore include some method of identifying the system provenance data of AI-generated content. To the extent feasible, the latent disclosure should convey the name of the covered provider, the name and version number of the GenAI system, a timestamp, and a unique identifier, and it should be detectable by the covered provider’s AI detection tool. Needless to say, personal provenance data must be avoided altogether.
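For illustration only, the sketch below shows one way a provider might embed system provenance data in an image’s metadata. The “ai_provenance” key, field names, and values are assumptions invented for this example; a production system would more likely rely on cryptographically signed manifests under an industry standard such as C2PA, since a bare metadata chunk is trivially stripped and would not satisfy the “extraordinarily difficult to remove” requirement:

```python
# A minimal sketch of embedding hypothetical system provenance data in a
# PNG text chunk; all field names and values here are illustrative.
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_latent_disclosure(in_path: str, out_path: str) -> str:
    """Attach hypothetical system provenance data to a PNG's metadata."""
    disclosure = {
        "provider": "ExampleAI Inc.",  # name of the covered provider
        "system": "ExampleGen",        # GenAI system name
        "version": "2.1",              # GenAI system version number
        "created": datetime.now(timezone.utc).isoformat(),  # timestamp
        "id": str(uuid.uuid4()),       # unique identifier
    }
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(disclosure))  # hypothetical key
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=meta)
    return disclosure["id"]
```

The returned unique identifier is what the detection tool sketched in section (1) would surface, tying each piece of output back to the system that produced it without identifying any user.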

(4) Licensing a covered provider’s GenAI system to a third party

By contract, the covered provider must require the licensee to maintain the system’s capability to include the disclosures described above. Should a licensee fail to do so, the covered provider must revoke the license within 96 hours of discovering the licensee’s failure.

Closing Thoughts

With the ever-improving performance of GenAI systems, the line between fiction and reality continues to blur. The authenticity of what we see, hear, or experience is thus at risk, with a very real possibility of great harm, especially in the hands of malicious actors. This legislation is designed to help mitigate such harm by bringing that blurred line back into sharp focus. Existing AI legislation in other states (Colorado, Utah, Illinois) does not impose such detailed disclosure requirements; rather, it merely requires that covered providers (or deployers/developers) disclose that a consumer is interacting with generative AI in the provision of the regulated services. In today’s digital world, shaped by the reach of social media, clear disclosure of AI usage is an important priority for states and interested parties alike, both for safety and for transparency.

The information you obtain at this site, or this blog is not, nor is it intended to be, legal advice. You should consult an attorney for advice regarding your individual situation. We invite you to contact us through the website, email, phone, or through LinkedIn. Contacting us does not create an attorney-client relationship. Please do not send any confidential information to us until such time as an attorney-client relationship has been established.
