California Cracks Down on AI Deepfakes: New Laws to Protect Performers and Elections
In a significant week for AI legislation, California Governor Gavin Newsom signed five groundbreaking deepfake-related bills into law, focusing on both the entertainment industry and election integrity. These laws highlight the state's proactive approach to the challenges AI poses to privacy, authenticity, and democracy. However, one notable piece of legislation, Senate Bill 1047 (SB 1047), has not yet been signed by the Governor, leaving some uncertainty about its future.

The new laws signed by Governor Newsom cover two primary areas: protecting performers from unauthorized digital replicas and regulating deepfakes in the context of elections.

Protecting Performers from Unauthorized Digital Replicas

Two of the signed bills address concerns over the unauthorized use of digital replicas—AI-generated likenesses or voices—of individuals, particularly performers. Assemblymember Rebecca Bauer-Kahan’s AB 1836 prohibits the commercial use of a deceased performer’s digital replica without permission from their estate. This law is essential for safeguarding the legacy of performers and ensuring their voice or likeness cannot be used in films, TV, video games, and other media without explicit consent.

Assemblymember Ash Kalra's AB 2602 complements this by requiring that contracts covering the use of a digital replica of a performer's likeness or voice be clearly defined, with the performer represented in negotiations. The law aims to protect actors and artists from having their careers undermined by AI-generated content that replicates their unique attributes without consent or fair compensation.

Combating Deepfakes in Elections

Three newly signed bills focus on ensuring election integrity in the face of AI-generated content. Assemblymember Marc Berman’s AB 2655 requires large online platforms to either remove or label deceptive, AI-altered content related to elections during specified periods. This measure empowers officials and candidates to seek legal remedies if platforms fail to comply, helping to curb the spread of misleading digital content.

Assemblymember Gail Pellerin’s AB 2839, an urgency measure, extends the timeframe during which AI-generated election materials are regulated. The bill also broadens the scope of existing laws by prohibiting the distribution of materially deceptive AI content about elected officials, candidates, and elections.

Similarly, Assemblymember Wendy Carrillo’s AB 2355 mandates that electoral advertisements using AI-generated or heavily altered content include a disclosure, ensuring that voters are aware when political materials have been digitally manipulated. This law aims to increase transparency and protect voters from being deceived by false AI-generated content.

The Impact of These Laws

California's efforts to tackle AI misuse are part of a broader conversation about AI governance. The state's focus on protecting privacy, intellectual property, and election integrity aligns with its position as a tech hub and a leader in legislative innovation. These laws build on previous deepfake regulations like AB 730, which addressed deepfake campaign videos, and AB 602, which provided recourse for individuals whose likeness was used in pornographic deepfakes.

What’s Next?

While these five bills are now law, SB 1047, another AI-related bill, remains unsigned, leaving room for speculation about its potential implications. It could further expand California's approach to AI governance, particularly in sectors not yet covered by the current laws. Stakeholders in AI, entertainment, and politics should watch Newsom's decision on SB 1047, as it could add another layer of regulation to California's expanding AI legal framework.

In conclusion, as AI technology continues to evolve, California is setting the pace with comprehensive laws to mitigate its potential harm. From protecting the legacy of deceased performers to ensuring election integrity, these measures show the state’s commitment to responsible AI governance.

Resources from AIGG on your AI Journey

Is your organization ready to navigate the complexities of AI with confidence?

At AIGG, we understand that adopting AI isn't just about the technology—it's about doing so responsibly, ethically, and with a focus on protecting privacy. We've been through business transformations before, and we're here to guide you every step of the way.

Whether you're a government agency, school district, or business, our team of experts—including attorneys, anthropologists, data scientists, and business leaders—can help you craft Strategic AI Use Statements that align with your goals and values. We'll also equip you with the knowledge and tools to build your playbooks, guidelines, and guardrails as you embrace AI.

Don’t leave your AI journey to chance.

Connect with us today for your free AI Tools Adoption Checklist, Legal and Operational Issues List, and HR Handbook policy. Or, schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.

Your next step is simple—reach out and start your journey toward safe, strategic AI adoption with AIGG.

Let’s invite AI in on our own terms.